Create channel settings by a custom channel type
You can create the settings for channels with a specified custom channel type using this API. If a custom type isn't specified for a channel, the global application settings are applied to that channel by default.
HTTP Request
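As a sketch, a request for this action might take the following form. The host and resource path shown here are assumptions included for illustration only and may differ from the actual endpoint.

```
POST https://{api_host}/v3/applications/settings_by_channel_custom_type
```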
Request body
The following table lists the properties of an HTTP request that this action supports.
Properties
| Required | Type | Description |
| --- | --- | --- |
| custom_type | string | Specifies the custom channel type. |

| Optional | Type | Description |
| --- | --- | --- |
| display_past_message | boolean | Determines whether to display past messages to new members of a group channel. If set to |
| allow_links | boolean | Determines whether to allow clickable links in a message within the application. (Default: |
| max_message_length | integer | Specifies the maximum character length of a message allowed to be sent within the application. Acceptable values are |
| user_messages_per_channel | integer | Specifies the maximum number of messages that a user is allowed to send in a channel during the time duration set in user_messages_per_channel_duration. |
| user_messages_per_channel_duration | integer | Specifies the time duration in seconds in which a user can send the set number of messages in a channel. This property works in conjunction with user_messages_per_channel. |
| domain_filter | nested object | A domain filter configuration to filter out text and file messages with URLs that contain any of the specified domains. |
| domain_filter.domains[] | array of strings | Specifies an array of domains to detect. Each item of the array should be specified as a combination of a domain name and a TLD (top-level domain) like |
| domain_filter.type | integer | Determines which filter to apply to messages with URLs that contain any of the specified domains. Acceptable values are the following. |
| profanity_filter | nested object | A filter configuration for certain words and patterns matching character combinations in strings which are not allowed to be used within the application. |
| profanity_filter.keywords[] | array of strings | Specifies an array of words to detect. |
| profanity_filter.regex_filters[] | array of strings | Specifies an array of regular expressions used for detection. Each item of the list should be specified in |
| profanity_filter.type | integer | Determines which filtering method to apply to messages that contain the specified keywords or regular expressions. Acceptable values are the following. |
| profanity_triggered_moderation | nested object | A moderation configuration that determines which penalty is automatically imposed on users who reach the profanity violation limit within a channel. |
| profanity_triggered_moderation.count | integer | Specifies the number of profanity violations allowed before a penalty is imposed on a user. |
| profanity_triggered_moderation.duration | integer | Specifies the duration of the time window, in seconds, during which a user's violations within a channel are counted. For example, if the |
| profanity_triggered_moderation.action | integer | Determines the type of moderation penalty, which is permanently imposed on users within a channel until canceled. Acceptable values are |
| image_moderation | nested object | A moderation configuration for inappropriate images in the application. The Google Cloud Vision API is used for image moderation and supports many types of images. |
| image_moderation.type | integer | Determines the moderation method to apply to the images and image URLs in text and file messages. Acceptable values are the following. |
| image_moderation.soft_block | boolean | If set to |
| image_moderation.limits | nested object | A set of values returned after an image has been moderated. These limit numbers range from one to five and specify the likelihood of the image passing the moderation standard. Acceptable likelihood values are |
| image_moderation.limits.adult | integer | Specifies the likelihood of the image containing adult content. |
| image_moderation.limits.spoof | integer | Specifies the spoof likelihood, which is the likelihood that a modification was made to the image to make it appear funny or offensive. |
| image_moderation.limits.medical | integer | Specifies the likelihood of the image being a medical image. |
| image_moderation.limits.violence | integer | Specifies the likelihood of the image containing violent content. |
| image_moderation.limits.racy | integer | Specifies the likelihood of the image containing racy content. |
| image_moderation.check_urls | boolean | Determines whether to check if the image URLs in text and file messages are appropriate. |
| push_template | nested object | A configuration of a push notification template for the specified custom channel type. |
| push_template.MESG | string | Specifies the message content to be displayed in push notifications for user messages sent in the specified custom channel type. You can customize the message with variables like |
| push_template.FILE | string | Specifies the message content to be displayed in push notifications for file messages sent in the specified custom channel type. You can customize the message with variables like |
| push_template.ADMM | string | Specifies the message content to be displayed in push notifications for admin messages sent in the specified custom channel type. You can customize the message with variables like |
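As an illustration of how the properties above nest together, a request body might look like the following sketch. Every value is hypothetical: the integers used for the type and action properties, the domains, keywords, and the push template variables such as {sender_name} are placeholders rather than documented or default values.

```json
{
    "custom_type": "support",
    "display_past_message": true,
    "allow_links": true,
    "max_message_length": 1000,
    "user_messages_per_channel": 5,
    "user_messages_per_channel_duration": 10,
    "domain_filter": {
        "domains": ["example.com", "example.org"],
        "type": 1
    },
    "profanity_filter": {
        "keywords": ["badword1", "badword2"],
        "regex_filters": [],
        "type": 1
    },
    "profanity_triggered_moderation": {
        "count": 3,
        "duration": 60,
        "action": 1
    },
    "image_moderation": {
        "type": 1,
        "soft_block": false,
        "limits": {
            "adult": 3,
            "spoof": 5,
            "medical": 5,
            "violence": 3,
            "racy": 4
        },
        "check_urls": true
    },
    "push_template": {
        "MESG": "{sender_name}: {message}",
        "FILE": "{sender_name} sent a file.",
        "ADMM": "{message}"
    }
}
```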
Responses
If successful, this action returns the settings for channels with the custom type in the response body.
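As a rough sketch, abbreviated to a few properties for brevity, the response body resembles the created settings object; the values shown here are illustrative only.

```json
{
    "custom_type": "support",
    "display_past_message": true,
    "allow_links": true,
    "max_message_length": 1000,
    "user_messages_per_channel": 5,
    "user_messages_per_channel_duration": 10
}
```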
Error
In the case of an error, an error object like the one below is returned. See the error codes section for more details.
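As a sketch, the error object generally has the following shape. The field names and the code and message values shown are assumptions for illustration; see the error codes section for the authoritative list.

```json
{
    "message": "Invalid value: \"custom_type\".",
    "code": 400111,
    "error": true
}
```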