This API offers an advanced system for automatically detecting offensive language in texts, allowing for the accurate identification of inappropriate words, insults, and potentially harmful expressions. Its main objective is to help evaluate texts in real time, improving the safety and quality of user-generated content. When a text is sent as input, the API returns a structured analysis that includes several indicators essential for understanding the level of toxicity present in the message.
One of the most important attributes is isProfanity, a Boolean value that indicates whether the text contains offensive or inappropriate language. The response also provides a score, a quantitative metric representing the estimated probability that the text contains profanity. This value is especially useful in environments where moderation thresholds need to be adjusted for different contexts or audiences.
The API also includes a severity field, which classifies the level of severity of the detected language. This classification allows for distinguishing between mild cases, such as colloquial expressions, and more serious situations, such as direct insults or highly toxic language. For clarity, the response also specifies flaggedFor, a set of categories that explains the exact reason why the text was flagged, allowing automated systems to make more informed decisions.
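The three indicators described above can be combined into a simple moderation rule. Below is a minimal sketch in Python, assuming the response shape documented on this page (`isProfanity`, `score`, `flaggedFor`); the threshold values are arbitrary examples for illustration, not values prescribed by the API:

```python
def moderation_action(analysis: dict,
                      block_score: float = 0.8,
                      review_score: float = 0.5) -> str:
    """Map a Text Analyzer response to a moderation action.

    The thresholds are illustrative; tune them per context or audience.
    """
    if not analysis.get("isProfanity", False):
        return "allow"
    score = analysis.get("score", 0.0)
    if score >= block_score:
        return "block"
    if score >= review_score:
        return "review"
    return "allow"

# Example using the sample response from this documentation.
sample = {"isProfanity": True, "score": 0.8, "severity": 70,
          "flaggedFor": ["insult"], "language": "en", "dialect": "general"}
print(moderation_action(sample))  # prints "block"
```

Because the decision depends only on plain dictionary fields, the same rule works whether the response comes from the live API or from cached results.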
Analyzes text and detects offensive language, returning the severity, score, flagged categories, language, and whether the content should be considered profane or inappropriate.
Text Analyzer - Endpoint Features
| Object | Description |
|---|---|
| Request Body | [Required] JSON |

Example response:

```json
{
  "isProfanity": true,
  "score": 0.8,
  "severity": 70,
  "flaggedFor": ["insult"],
  "language": "en",
  "dialect": "general"
}
```
```shell
curl --location --request POST 'https://zylalabs.com/api/11449/extract+text+content+censorship+api/21609/text+analyzer' \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "message": "I hate you"
  }'
```
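The same call can be issued programmatically. Here is a hedged sketch using Python's standard-library `urllib`; `YOUR_API_KEY` is a placeholder, and the snippet only builds the request rather than sending it, so it mirrors the curl example without requiring network access:

```python
import json
import urllib.request

URL = ("https://zylalabs.com/api/11449/"
       "extract+text+content+censorship+api/21609/text+analyzer")

def build_request(message: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) the POST request from the curl example."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        URL,
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_request("I hate you", "YOUR_API_KEY")
print(req.method, req.full_url)
# To actually send it: urllib.request.urlopen(req) returns the JSON analysis.
```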
| Header | Description |
|---|---|
| Authorization | [Required] Should be `Bearer access_key`. See "Your API Access Key" above once you are subscribed. |
No long-term commitment. Upgrade, downgrade, or cancel anytime. Free Trial includes up to 50 requests.
The Text Analyzer endpoint returns structured data that includes indicators of offensive language in the analyzed text. Key outputs include whether the text contains profanity, a severity level, a probability score, flagged categories, the language of the text, and the identified dialect.
The key fields in the response data are `isProfanity` (Boolean), `score` (numeric), `severity` (numeric), `flaggedFor` (array of categories), `language` (string), and `dialect` (string). These fields provide insights into the nature and severity of the detected offensive language.
The response data is organized in a JSON format, with each key representing a specific aspect of the analysis. For example, `isProfanity` indicates if the text is offensive, while `flaggedFor` lists the reasons for flagging, allowing for easy parsing and interpretation by automated systems.
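Because the response is plain JSON, parsing it is straightforward in any language. The following Python sketch decodes the example response from this documentation into a typed record; the field set is taken directly from the documented response:

```python
import json
from dataclasses import dataclass

@dataclass
class Analysis:
    """Typed view of a Text Analyzer response."""
    is_profanity: bool
    score: float
    severity: int
    flagged_for: list
    language: str
    dialect: str

def parse_analysis(raw: str) -> Analysis:
    """Decode the Text Analyzer JSON response body."""
    data = json.loads(raw)
    return Analysis(
        is_profanity=data["isProfanity"],
        score=data["score"],
        severity=data["severity"],
        flagged_for=data["flaggedFor"],
        language=data["language"],
        dialect=data["dialect"],
    )

raw = ('{"isProfanity":true,"score":0.8,"severity":70,'
       '"flaggedFor":["insult"],"language":"en","dialect":"general"}')
result = parse_analysis(raw)
print(result.flagged_for)  # prints ['insult']
```

Indexing the fields directly (rather than using `.get`) makes a missing key fail loudly, which is usually what you want when validating an upstream API response.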
The Text Analyzer endpoint provides information on the presence of offensive language, severity levels, risk scores, categories of flagged content, and the language and dialect of the text. This comprehensive analysis aids in understanding the toxicity of user-generated content.
Users can customize their data requests by adjusting the input text they send to the Text Analyzer endpoint. While the endpoint does not accept additional parameters, the content of the text itself can vary to test different scenarios and analyze various types of language.
Typical use cases for the Text Analyzer data include content moderation for social media platforms, filtering user comments on websites, enhancing chat applications to prevent harassment, and ensuring compliance with community guidelines by identifying toxic language.
Data accuracy is maintained through continuous updates to the underlying language models and regular evaluations against diverse datasets. This ensures that the API can effectively recognize and classify offensive language across different contexts and dialects.
Standard data patterns in the response include a clear indication of whether the text is profane (`isProfanity`), a numeric score reflecting the severity of the language, and a list of categories in `flaggedFor`. Users can expect consistent formatting and structure in the JSON response.
Please have a look at our Refund Policy: https://zylalabs.com/terms#refund
To obtain your API key, you first need to sign in to your account and subscribe to the API you want to use. Once subscribed, go to your Profile, open the Subscription section, and select the specific API. Your API key will be available there and can be used to authenticate your requests.
You can’t switch APIs during the free trial. If you subscribe to a different API, your trial will end and the new subscription will start as a paid plan.
If you don’t cancel before the 7th day, your free trial will end automatically and your subscription will switch to a paid plan under the same plan you originally subscribed to, meaning you will be charged and gain access to the API calls included in that plan.
The free trial ends when you reach 50 API requests or after 7 days, whichever comes first.
No, the free trial is available only once, so we recommend using it on the API that interests you the most. Most of our APIs offer a free trial, but some may not include this option.
Yes, we offer a 7-day free trial that allows you to make up to 50 API calls at no cost, so you can test our APIs without any commitment.
Zyla API Hub is like a big store for APIs, where you can find thousands of them all in one place. We also offer dedicated support and real-time monitoring of all APIs. Once you sign up, you can pick and choose which APIs you want to use. Just remember, each API needs its own subscription. But if you subscribe to multiple ones, you'll use the same key for all of them, making things easier for you.
Service Level:
100%
Response Time:
170ms