Safety & Security Internal Measures
Safety & Security Backgrounder
We are committed to making our sites safer for our members and our community. Our websites are focused on increasing the safety and security of the sites and on preventing those intent on committing illegal activity, such as fraud or human trafficking, from using the sites. This process is ongoing and is backed by a team that works diligently on safety- and security-related initiatives across the sites, as well as by the latest technologies, such as artificial intelligence, which help effectively reduce this unwanted behavior. The following are some examples of safety measures that our websites have recently implemented.
Implementing Stronger Policies to Prevent Illegal Activity
Implemented a no-nudity policy.
Implemented strict content policies to prevent illegal activity.
Implemented stricter image content standards.
Implemented a combination of automated passport/ID verification and manual, human-operated passport/ID verification, in which an operator assesses a picture of the customer holding his or her passport or ID. This can be enabled everywhere, but will first be enabled in all adult categories, to verify that the minimum age requirement is met.
Other undisclosed measures to determine whether a verified account is being used by someone other than the person who was verified.
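As an illustrative sketch only (every function name, field, and threshold below is a hypothetical placeholder, not the production system), the combined automated/manual verification flow could route customers like this:

```python
# Hypothetical sketch of a combined automated + manual ID verification flow.
# Names and the confidence threshold are illustrative assumptions.

AUTOMATED_CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off for auto-approval

def run_automated_check(id_photo, selfie_with_id):
    """Stand-in for an automated passport/ID check.

    Returns (passed, confidence), where confidence is the model's
    certainty that the document is valid and matches the holder.
    A real system would call a document-verification service here;
    we simulate a confident pass for the example.
    """
    return True, 0.95

def verify_customer(id_photo, selfie_with_id):
    passed, confidence = run_automated_check(id_photo, selfie_with_id)
    if passed and confidence >= AUTOMATED_CONFIDENCE_THRESHOLD:
        return "verified"
    # Low confidence or failure: route the picture of the customer
    # holding their ID to a human operator for manual review.
    return "manual_review"

# Verification is enabled first in adult categories; the flag below
# illustrates how it could later be rolled out everywhere.
ADULT_CATEGORIES = {"adult"}

def requires_verification(category, global_rollout=False):
    return global_rollout or category in ADULT_CATEGORIES
```

The key design point is the fallback: anything the automated check cannot approve with high confidence drops to the manual queue rather than being approved.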
Implemented artificial intelligence on textual and visual content to detect possible illicit elements in ads. All ads are screened by AI after posting to ensure 100% screening. Some detected behavior results in instant rejection; other detected elements are red-flagged and forwarded to a human moderator for the best possible assessment. The AI we use automatically identifies new types of
inappropriate content as scammers and posters become more savvy, allowing our moderation team to stay ahead of the curve and focus its energy on higher-impact activities.
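A minimal sketch of this three-way triage (reject / red-flag / approve), assuming a hypothetical violation score in [0, 1]; the scoring function, term list, and thresholds are placeholders, not the real model:

```python
# Minimal sketch of the post-posting AI screening triage.
# Thresholds and the scoring stand-in are illustrative assumptions.

REJECT_THRESHOLD = 0.9   # assumed: near-certain violations are auto-rejected
FLAG_THRESHOLD = 0.5     # assumed: uncertain cases go to a human moderator

def score_ad(text, images):
    """Stand-in for the AI model's violation score in [0, 1]."""
    banned = {"example_banned_term"}  # illustrative term list
    hits = sum(1 for word in text.lower().split() if word in banned)
    return min(1.0, hits * 0.6)

def screen_ad(text, images=()):
    score = score_ad(text, images)
    if score >= REJECT_THRESHOLD:
        return "rejected"        # instant rejection
    if score >= FLAG_THRESHOLD:
        return "flagged"         # red-flagged for human moderation
    return "approved"
```

Because every ad passes through `screen_ad` after posting, coverage is 100% even when most ads are approved without human involvement.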
Preventing Inappropriate Content
• Images are reviewed for compliance with content policies
Our AI is able to ‘learn’ Adster image content policies and automatically detect images that fall
outside of the policy. This has allowed similar P2P marketplaces to evaluate 10x more images at a
fraction of the cost and time it takes human moderators to evaluate the same images.
• Keyword searches conducted across site to locate inappropriate or illegal content
Users running these searches are shown a warning and no results are provided. Repeated attempts can be penalized with an IP block.
• Banned inappropriate terms list utilized to identify and prevent illegal content
Any inappropriate items/terms that are shown in images or in text can be detected and automatically removed.
• Child exploitation response process used to prioritize child related matters
The Adster Demographics Model can also automatically identify pictures containing children and flag them to our team in real time.
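The keyword-search safeguard above could be sketched as follows; the banned-term list, repeat limit, and function names are illustrative assumptions, not the deployed system:

```python
# Hypothetical sketch of the on-site search safeguard: searches for
# banned terms return no results plus a warning, and repeated attempts
# from the same IP trigger a block. All names/limits are assumptions.

from collections import defaultdict

BANNED_SEARCH_TERMS = {"example_banned_term"}  # illustrative list
MAX_WARNINGS = 3                               # assumed repeat limit

_warning_counts = defaultdict(int)
_blocked_ips = set()

def handle_search(ip, query):
    if ip in _blocked_ips:
        return {"results": [], "message": "access blocked"}
    if any(term in query.lower() for term in BANNED_SEARCH_TERMS):
        _warning_counts[ip] += 1
        if _warning_counts[ip] >= MAX_WARNINGS:
            _blocked_ips.add(ip)
        return {"results": [], "message": "warning: this search violates policy"}
    return {"results": ["...normal results..."], "message": ""}
```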
Increasing Online Classified Ad Controls to Prevent Abuse
• Inappropriate ad content removed
AI can be taught to recognize inappropriate images in ads and automatically alert the moderation
team for removal. This saves the moderation team time and energy, since ads that fall
outside the content policy are brought to their attention immediately.
• Known bad URLs blocked from being posted on site
• HTML images blocked in ads (except for trusted users)
HTML images can be processed through an AI API that automatically identifies inappropriate material
and/or spam, should Adster choose to allow HTML images in the future.
• Character limit on ads
• Users consistently posting inappropriate imagery will be flagged by AI.
• All ads edited by user after initial review and approval are screened again.
• Built tool to restrict ad poster capabilities for policy violations
• Suspicious URLs linked to on-site destinations manually reviewed by staff for appropriateness
• Ad moderator accountability system: this system can include a new visual-recognition data point. As moderators remove or allow posts that AI has flagged, their decisions can be tracked against each moderator's performance.
• CAPTCHAs added to report abuse process to prevent abuse reporting misuse
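The accountability data point described above could be tracked as in this sketch; the metric, counters, and names are assumptions for illustration, not the actual system:

```python
# Illustrative sketch of the moderator accountability data point:
# per moderator, track how remove/allow decisions relate to AI flags.
# The metric and all names are hypothetical.

from collections import defaultdict

# Counts per moderator of decisions on AI-flagged posts.
_stats = defaultdict(lambda: {"flagged_removed": 0, "flagged_allowed": 0})

def record_decision(moderator_id, ai_flagged, decision):
    """decision is 'remove' or 'allow'; only AI-flagged posts are tracked."""
    if not ai_flagged:
        return
    key = "flagged_removed" if decision == "remove" else "flagged_allowed"
    _stats[moderator_id][key] += 1

def agreement_rate(moderator_id):
    """Share of AI-flagged posts this moderator removed (None if no data)."""
    s = _stats[moderator_id]
    total = s["flagged_removed"] + s["flagged_allowed"]
    return s["flagged_removed"] / total if total else None
```

A persistently low agreement rate would not by itself prove a moderator wrong, but it gives reviewers a concrete signal to audit.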
Planned Partnering with Law Enforcement and Safety Advocates/Experts
• All law enforcement inquiries involving minors given first priority
• The AI Demographics Model can automatically identify pictures with minors, pushing alerts to
moderators to take action with the relevant authorities.
• Ads containing possible minors investigated and referred to nonprofit organizations.
Our team is happy to work with them directly.
Planning to create an automated process to quickly report ads suspected of child exploitation to the appropriate local organizations.
• Created a process for public users to report illegal postings with detailed information, which is forwarded to our human moderators for review (community ad report function).
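A sketch of how such a report intake could prioritize child-related matters first, as described above; categories, field names, and the priority ordering are illustrative assumptions:

```python
# Hypothetical sketch of the community ad report function: public reports
# are queued for human review, with child-exploitation reports first.
# Categories and field names are illustrative assumptions.

import heapq

PRIORITY = {"child_exploitation": 0, "illegal_content": 1, "other": 2}

_queue = []        # min-heap: lowest priority number is reviewed first
_counter = 0       # tie-breaker so equal-priority reports stay in order

def submit_report(ad_id, category, details):
    global _counter
    prio = PRIORITY.get(category, PRIORITY["other"])
    heapq.heappush(_queue, (prio, _counter,
                            {"ad_id": ad_id,
                             "category": category,
                             "details": details}))
    _counter += 1

def next_report_for_review():
    """Pop the highest-priority pending report, or None if the queue is empty."""
    return heapq.heappop(_queue)[2] if _queue else None
```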