Image moderation
Sendbird’s image moderation is powered by the Google Cloud Vision API. This feature moderates text and file messages that contain explicit images or inappropriate image URLs. It uses five categories to moderate images: adult, spoof, medical, violence, and racy. Image moderation doesn't apply to channel images or profile images uploaded to the Sendbird server.
After an image is uploaded and moderated, the feature returns limit values. These numbers range from 1 to 5 for each category and correspond to how likely the image is to be blocked. The following table shows what each limit means:
Limit | Description |
---|---|
1 (very unlikely) | The image is very unlikely to be blocked. |
2 (unlikely) | The image is unlikely to be blocked. |
3 (possible) | The image may possibly be blocked. |
4 (likely) | The image is likely to be blocked. |
5 (very likely) | The image is very likely to be blocked. |
You can test different images on the Google Cloud Vision API's try-it tool to see how moderation works and determine which image moderation settings suit your needs.
Note: This feature may not work on culturally sensitive images such as those related to religion, drugs, or weapons.
HTTP request
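The request line below is a sketch; it assumes this setting is updated through the application's settings-by-channel-custom-type resource, with `{application_id}` and `{custom_type}` as placeholders for your application ID and the target custom channel type:

```http
PUT https://api-{application_id}.sendbird.com/v3/applications/settings_by_channel_custom_type/{custom_type}
```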
Parameters
The following table lists a parameter that this action supports.
Required
Parameter name | Type | Description |
---|---|---|
custom_type | string | Specifies the custom channel type to which a set of settings is applied. |
Request body
The following table lists the properties of an HTTP request that this action supports.
Properties
Optional
Property name | Type | Description |
---|---|---|
image_moderation | nested object | Specifies the configuration for moderating inappropriate images in the application. This feature is powered by the Google Cloud Vision API, which supports various image types. |
image_moderation.type | int | Determines which moderation method to apply to images and image URLs in text and file messages. Acceptable values are the following: |
image_moderation.soft_block | boolean | Determines whether to moderate images in messages. If true, the moderation method set by the |
image_moderation.limits | nested object | Specifies a set of values returned after an image has been moderated. These limit numbers range from one to five and indicate the likelihood of the image passing the moderation standard. (Default: |
image_moderation.limits.adult | int | Specifies the likelihood that the image contains adult content. |
image_moderation.limits.spoof | int | Specifies the likelihood that the image contains spoof. |
image_moderation.limits.medical | int | Specifies the likelihood that the image contains medical content. |
image_moderation.limits.violence | int | Specifies the likelihood that the image contains violent content. |
image_moderation.limits.racy | int | Specifies the likelihood that the image contains racy content. |
image_moderation.check_urls | boolean | Determines whether to check if the image URLs in text and file messages are appropriate. This property can filter URLs of inappropriate images but it can’t moderate URLs of websites containing inappropriate images. For example, image search results of adult images on “google.com” will not be filtered. |
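As a sketch, a request body that configures the properties above could look like the following. The type value of 1 and the specific limit numbers are illustrative assumptions rather than values prescribed on this page:

```json
{
    "image_moderation": {
        "type": 1,
        "soft_block": false,
        "limits": {
            "adult": 3,
            "spoof": 5,
            "medical": 4,
            "violence": 3,
            "racy": 4
        },
        "check_urls": true
    }
}
```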
If you want to turn off image moderation, send a PUT request with the value of the type property set to 0 as shown below:
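The endpoint path and the Api-Token header in this sketch follow general Platform API conventions and are assumptions; only the image_moderation.type value of 0 comes from this page.

```bash
curl -X PUT \
  -H "Api-Token: {api_token}" \
  -H "Content-Type: application/json" \
  -d '{"image_moderation": {"type": 0}}' \
  "https://api-{application_id}.sendbird.com/v3/applications/settings_by_channel_custom_type/{custom_type}"
```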
When a message has passed image moderation, the Sendbird server sends back a success response body like the following:
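A trimmed, illustrative sketch of such a response for a file message is shown below; the field names and values are assumptions based on the general message resource and are not specified on this page:

```json
{
    "message_id": 640904364,
    "type": "FILE",
    "custom_type": "",
    "channel_url": "sendbird_group_channel_24896_0ca1a06f",
    "user": {
        "user_id": "jay",
        "nickname": "Jay"
    },
    "file": {
        "name": "beach.jpg",
        "type": "image/jpeg",
        "size": 241402,
        "url": "https://file-us-1.sendbird.com/..."
    },
    "created_at": 1542762344631
}
```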
When a message has been blocked by image moderation, the Sendbird server sends back an error response containing the 900066 (ERROR_FILE_MOD_BLOCK) or 900065 (ERROR_FILE_URL_BLOCK) code.
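As a sketch, assuming the Platform API's common error format, such a response could look like the following; the message string is illustrative:

```json
{
    "message": "The file is not allowed to be sent because it violates the image moderation policy.",
    "code": 900066,
    "error": true
}
```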
Response
If successful, this action returns the updated image moderation settings for channels with the specified custom channel type in the response body.
In the case of an error, an error object is returned. A detailed list of error codes is available here.