Streaming platforms often say they can't moderate content until it's reported, using "safe harbor" laws as protection. But should platforms that profit from content have more responsibility to actively monitor for harmful material? Or would that create an impossible standard that would shut down smaller platforms?