🙋 Polls

Do you agree with California’s proposed regulation that would require companies to thoroughly test their advanced AI models for safety prior to public release?

Tuesday, Sep 3

👍 Yes (66%) – "A responsible company, regardless of product, should avoid releasing dangerous products to the public, and expect repercussions from harm done by its products."

  • "Certainly, at this stage, it is unlikely to be foolproof testing. However, this lays the groundwork for repercussions if no testing is done and there are significant issues after the fact, and anyone who provides things for free has some culpability to ensure they are not dangerous. If a company gave away free guns to the mentally ill, society would be foolhardy to absolve it of guilt if someone died as a result."

  • "AI is a fantastic piece of equipment when used correctly, but the simple fact is it is already being used incorrectly by numerous people. The use of AI to impact things like elections or promote misinformation, generating false voices or false images to create plausible, false narratives, is terrifying. And the fact that it is currently unregulated only stands to impose greater, more severe dangers on us all. Sometimes, yes, the individual who misuses the tool is the one at fault, but in this case, regulation at the development level is necessary. 'Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should.' The best quote to sum it up, from Jurassic Park. AI has serious, dangerous potential. Unstifled innovation is not always a good thing, and I hope more stringent laws continue to be passed."

  • "Even though people should be accountable for their own choices, such as those who might seek out nefarious information from AI bots, these laypeople by and large don't understand the technology they're using. Ultimately it's the technology developers who must use their advanced knowledge to ensure that only a safe product is released to the public. Any tools they make available could be used by the public, and by making the tools available, they need to take responsibility for their use. Some level of regulation is important here, just as cars and drugs are safety tested before being released."

  • "We test and regulate so many other things - drugs, food, privacy (though not as much as we should), caller ID, many other forms of technology; it seems silly to me to think AI doesn't need any regulation either. However, what DOES concern me is what that regulation looks like, because like any form of regulation it could overstep. At this point, the regulations seem very fair. The right question, I think, is what aspects of AI do we think need to remain unregulated, and how do we protect those 'rights,' so to speak?"

👎 No (15%) – "I believe trying to harness AI away from harmful activities is an impossible task, since it will constantly change as it learns and the programmers cannot foresee every instance of how it might change, depending on the program's 'resiliency.' Like trying to control crime by making laws, this is just another fruitless measure for 'window dressing.' Can we sue the government of California when their testing and laws fail to protect people from nefarious users? Everything can be used for a 'bad' purpose if you're a 'bad' person."

  • "The AI we have today is just machine learning, which is not intelligence. It is simply a tool to be used by people. The computer is a tool. Did we create laws on how the computer was developed when we first built it, or do we enact laws on the people who use it?"

🤷 Unsure/other (19%) – "Although the bill is great in theory, it's hard to legislate and becomes a slippery slope. What one company, state, or governmental agency deems as 'catastrophic' won't be deemed as such by all, and it will bog our court systems down with frivolous lawsuits claiming that AI is or isn't responsible for the behavior of an individual. Anyone developing an AI model should exercise some degree of quality control in their technology development, but ultimately, once the tool is developed, it's incredibly difficult to control how people use it."

  • "Government regulation has historically been abysmal at correctly pinning down technology. The wording 'certain computing thresholds' certainly gives me pause, as I have no confidence that anyone drafting legislation understands what a computing threshold is, let alone how to define it. It's a moving target, and legislation written by lawyers is too slow to keep up. However, I agree with the sentiment and hope we can figure out a way to put a leash on AI."

