Use Yahoo Open NSFW to detect the likelihood that an image is not suitable for work (NSFW), i.e. contains pornographic content. The model returns an NSFW probability ranging from 0% to 100%.
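Below is a minimal sketch of running the model locally, assuming you have downloaded `deploy.prototxt` and `resnet_50_1by2_nsfw.caffemodel` from Yahoo's `open_nsfw` repository. OpenCV's DNN module stands in for a full Caffe installation here, and the file paths and image name are placeholders.

```python
import cv2

# Load the Open NSFW Caffe model with OpenCV's DNN module.
# The file paths are assumptions -- point them at the files
# shipped in the yahoo/open_nsfw repository.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "resnet_50_1by2_nsfw.caffemodel")

image = cv2.imread("photo.jpg")  # placeholder image path
# 224x224 input with per-channel mean subtraction in BGR order,
# mirroring the preprocessing in Yahoo's classify_nsfw.py script.
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(224, 224),
                             mean=(104, 117, 123), swapRB=False, crop=False)
net.setInput(blob)

scores = net.forward()            # softmax output: [SFW score, NSFW score]
nsfw_probability = float(scores[0][1])
print(f"NSFW Probability: {nsfw_probability * 100:.2f}%")
```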
Possible Use Cases:
- Automatically flag inappropriate content uploaded in your online community.
- Block inappropriate images from being loaded for a child-safe environment.
- Continue fine-tuning this prediction model on your own dataset to detect a broader set of NSFW content.
Example predictions on three sample images (images not shown): NSFW Probability 0%, 15.2%, and 0.09%.
As far as we can tell, this is the only ML solution for detecting not-suitable-for-work (NSFW) content. However, because NSFW content is highly subjective, we recommend testing the algorithm on your own images, which you can easily do by running the notebook tutorial below. Different contexts will require different cutoffs for what probability constitutes NSFW content.
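For instance, a community moderation pipeline might flag anything above a chosen cutoff for human review. The sketch below illustrates this; the threshold value, file names, and probabilities are illustrative placeholders, not values recommended by Yahoo.

```python
# Apply a context-specific cutoff to the model's output.
# 0.8 is an arbitrary example threshold; stricter communities
# might lower it, more permissive ones might raise it.
NSFW_THRESHOLD = 0.8

def is_nsfw(nsfw_probability: float, threshold: float = NSFW_THRESHOLD) -> bool:
    """Return True when the predicted NSFW probability meets the cutoff."""
    return nsfw_probability >= threshold

# Hypothetical images paired with example probabilities.
for path, prob in [("beach.jpg", 0.152), ("logo.png", 0.0009)]:
    if is_nsfw(prob):
        print(f"{path}: flagged for review (p={prob:.1%})")
    else:
        print(f"{path}: allowed (p={prob:.1%})")
```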
Yahoo has reported that this model incorrectly classifies roughly 7% of images as potentially NSFW, at an undisclosed cutoff used in their testing.
Code is licensed under the BSD 2-Clause license. See the Source Code for more details.
The model was pre-trained on the ImageNet 1000-class dataset and then fine-tuned on a proprietary NSFW dataset that Yahoo has not released.