Headless Integration
In addition to the standard integration, we offer endpoints you can use to build your own UI for the following flows:
- community reports
- statements of reason and appeals
- moderation decisions
- inquiries
 
To call these endpoints, follow the same principles as the standard integration.
Community Reports
The Checkstep API can be used to receive reports from members of your community. These are a valuable source of information, as they often contain context that AI models may fail to detect, and a high volume of community reports can also signal an emergency.
An example payload:
{
  "id": "3e5cb4b2-2fbd-442d-a036-3ef28760f699",
  "type": "post",
  "reporter": "john@doe.com",
  "reason": "Remove this hateful post please!",
  "potential_violations": [
    {
      "policy": "HTE"
    }
  ]
}
The id supplied as part of the request should be the ID of the content on your platform.
If a piece of content with the same type and id has already been sent to Checkstep, the community report will be
associated with that piece of content; otherwise the community report will still be recorded, but in isolation.
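As a sketch, the report payload above can be assembled in code before being sent. The helper name below is illustrative; only the payload shape comes from the example above.

```python
import json

def build_community_report(content_id, content_type, reporter, reason, policies):
    """Assemble a community report payload matching the example shape."""
    return {
        "id": content_id,          # your platform's ID for the reported content
        "type": content_type,      # e.g. "post"
        "reporter": reporter,
        "reason": reason,
        "potential_violations": [{"policy": p} for p in policies],
    }

report = build_community_report(
    "3e5cb4b2-2fbd-442d-a036-3ef28760f699",
    "post",
    "john@doe.com",
    "Remove this hateful post please!",
    ["HTE"],
)
print(json.dumps(report, indent=2))
```

Because the id is your platform's content ID, the report is matched against any content you previously sent with the same type and id.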
Statement of reason
To provide transparency to your users, you can call the statement of reason endpoint. If a piece of content was removed, it returns information about the moderation decision associated with the removal.
You will also get the status of any appeal.
Statuses are, in order:
- no-incident: no incident has been detected on this content case yet
- enforced: a human or automated decision has been enforced on this content; the sor section is provided
- appealed: an appeal has been submitted and is pending review; the appeal section is provided
- appeal-ruled: the appeal has been reviewed; the appealRuling section is provided
An example of a JSON response:
{
  "id": "f04d6dbc40ec2839d5a7f38c7aad902f6ea092e8",
  "status": "appeal-ruled",
  "content": {
    "timestamp": "2023-12-19T16:27:39.806132Z",
    "fields": [
      {
        "id": "text",
        "type": "text",
        "src": "Offensive text"
      }
    ]
  },
  "sor": {
    "timestamp": "2023-12-19T16:28:57.568469Z",
    "violations": [
      {
        "policy": {
          "name": "Hate Speech",
          "code": "HTE",
          "description": "Our platform is not a place for ..."
        },
        "field": "text"
      }
    ]
  },
  "appeal": {
    "timestamp": "2023-12-20T13:04:51.209658Z",
    "reason": "I don't agree with the decision!"
  },
  "appealRuling": {
    "timestamp": "2023-12-20T13:07:04.890362Z",
    "accepted": false
  }
}
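Since the sections present in the response follow directly from the status, a client can derive which sections it may rely on. The helper below is a sketch, not part of the API; the status names and section keys mirror the response above.

```python
def summarize_statement_of_reason(response):
    """Return the sections of a statement-of-reason response that the
    status guarantees to be present (per the status list above)."""
    sections_by_status = {
        "no-incident": [],
        "enforced": ["sor"],
        "appealed": ["sor", "appeal"],
        "appeal-ruled": ["sor", "appeal", "appealRuling"],
    }
    names = sections_by_status[response["status"]]
    return {name: response[name] for name in names}

# With the example response, status "appeal-ruled" exposes all three sections:
example = {
    "status": "appeal-ruled",
    "sor": {"violations": [{"policy": {"code": "HTE"}}]},
    "appeal": {"reason": "I don't agree with the decision!"},
    "appealRuling": {"accepted": False},
}
sections = summarize_statement_of_reason(example)
```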
Appeal
When your users do not agree with a moderation decision, they should have the right to appeal. Call the appeal endpoint to submit an appeal.
An example payload:
{
  "id": "3e5cb4b2-2fbd-442d-a036-3ef28760f699",
  "type": "post",
  "timestamp": "2023-11-16T13:42:46.123Z",
  "statement": "I don't agree with the decision, bring my content back !",
  "author": "a672bd2a-d103-46e7-a76a-77b431bc48b7"
}
A successful request returns a 202 Accepted response status code.
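As a sketch, the appeal payload can be assembled like this; the helper name is illustrative, and the timestamp format follows the example above.

```python
from datetime import datetime, timezone

def build_appeal(content_id, content_type, statement, author):
    """Assemble an appeal payload matching the example shape,
    stamped with the current UTC time in millisecond precision."""
    timestamp = (
        datetime.now(timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z")  # e.g. "2023-11-16T13:42:46.123Z"
    )
    return {
        "id": content_id,        # your platform's ID for the removed content
        "type": content_type,
        "timestamp": timestamp,
        "statement": statement,  # the user's appeal statement
        "author": author,        # your platform's ID for the appealing user
    }
```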
Decisions
If you've made a decision about the content or author outside Checkstep, send us the related information to ensure transparency.
About content
An example payload:
{
  "id": "f8cdba92-7b53-4b14-83ac-4a168aaca4c3",
  "type": "user-profile",
  "decision": "act",
  "moderator": "ae484ce1-1ad5-487e-a112-e5d73a925e45",
  "violations": [
    {
      "field": "firstName",
      "policy": "HTE"
    }
  ]
}
The list below contains all options for decision:
- act: you took down the content
- dismiss: you reviewed the content and found no violation
- escalate: you decided to escalate the case to an upper level
- overturn: you restored taken-down content
- upheld: you rejected the user's appeal and kept the content down
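Since decision accepts a fixed set of values, it can be worth validating it client-side before sending. The helper below is a sketch; only the payload shape and decision values come from the documentation above.

```python
CONTENT_DECISIONS = {"act", "dismiss", "escalate", "overturn", "upheld"}

def build_content_decision(content_id, content_type, decision, moderator, violations):
    """Assemble a content decision payload, rejecting unknown decision values.

    `violations` is a list of (field, policy_code) pairs, matching the
    example payload's {"field": ..., "policy": ...} objects.
    """
    if decision not in CONTENT_DECISIONS:
        raise ValueError(f"unknown decision: {decision!r}")
    return {
        "id": content_id,
        "type": content_type,
        "decision": decision,
        "moderator": moderator,
        "violations": [{"field": f, "policy": p} for f, p in violations],
    }
```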
About author
{
  "author": "f8cdba92-7b53-4b14-83ac-4a168aaca4c3",
  "decision": "act",
  "moderator": "ae484ce1-1ad5-487e-a112-e5d73a925e45",
  "violations": [
    "HTE",
    "SXC"
  ]
}
The list below contains all options for decision:
- act: you banned the author
- overturn: you canceled the ban for the author
- escalate: you decided to escalate the case to an upper level
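The author payload differs from the content one: it uses author instead of id/type, and violations are bare policy codes. A sketch (helper name illustrative):

```python
AUTHOR_DECISIONS = {"act", "overturn", "escalate"}

def build_author_decision(author_id, decision, moderator, policies):
    """Assemble an author decision payload; note that violations here
    are bare policy codes, not {"field": ..., "policy": ...} objects."""
    if decision not in AUTHOR_DECISIONS:
        raise ValueError(f"unknown decision: {decision!r}")
    return {
        "author": author_id,
        "decision": decision,
        "moderator": moderator,
        "violations": list(policies),
    }
```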
Inquiries
You can open a new incident on a piece of content by creating an inquiry. This can be useful if you want your moderators to review content that was not identified as potentially violating by automation.
An example payload: