Los Angeles Artificial Intelligence (AI) & Emerging Technology Harm Lawyers

Holding AI developers and businesses accountable for the physical, financial, and psychological harm they cause

AI use across the country is evolving, and many of the companies developing and deploying AI programs are based here in California. At McNicholas & McNicholas, LLP, we represent victims who have been harmed by the misuse of AI and other technology. Our lawyers stay current as the AI laws that protect consumers continue to develop.

If you’ve been physically or emotionally harmed or financially affected because of the misuse of AI, you may have a legal claim against the companies that develop, sell, and use AI programs.

Examples of AI harm include:

  • Emotional, reputational, and other harm from “deepfakes.”
  • Physical harm caused by AI products that malfunction.
  • Unlawful employment discrimination resulting from AI-driven decision-making in violation of California’s Fair Employment and Housing Act.
  • Emotional and physical harm because social media platforms failed to protect your child from predators who use AI to sexually exploit children.

Our Los Angeles AI and emerging technology lawyers can help determine if you have a legal claim.

What is artificial intelligence?

Artificial intelligence (AI) refers to computer programs designed to perform tasks that typically require human intelligence, sometimes using data to improve performance over time. AI has many beneficial uses, from helping diagnose diseases to making business transactions more efficient. But AI can also be abused and misused. Developers and users can employ AI-generated images, voiceovers, and other content to deceive consumers, children, organizations, businesses, and others.

How can artificial intelligence harm individuals?

Companies are investing billions in AI, but the benefits come with a price.  AI creates many dangers and opportunities for abuse.

Some of the current dangers include the following:

  1. Social manipulation. AI makes it hard to know what is real and what is not. The AI algorithms that social media platforms and many websites use already curate and manipulate the content you see based on your prior searches. AI-generated images and videos, along with cloned voices and deepfakes, can affect the way users think about politics, work, daily life, medicine, and many other issues.
  2. Social surveillance. AI has serious privacy implications. AI tools, such as the facial recognition technology used in China, can track a person’s movements, relationships, and other aspects of their life. There are also concerns about law enforcement using AI to target individuals rather than investigate specific crimes.
  3. The lack of data privacy. AI software may collect consumer, medical, economic, and social data. There are concerns about whether this data is being appropriately used and properly secured.
  4. AI biases. The demographics of AI developers can affect AI results.
  5. Criminal activity. Voice cloning-enabled phone scams may increase with AI.
  6. Child safety concerns. Concerns about online sexual abuse and exploitation, and about children’s online privacy, continue to grow.
  7. Psychological harm. AI may erode users’ ability to think and react critically as they over-rely on it to make decisions for them. People may be inclined to use AI-generated content to self-treat health conditions instead of making an appointment with a healthcare professional.
  8. Loss of human control. There are concerns that AI may become self-aware and “act beyond humans’ control — possibly in a malicious manner.” Under existing law, however, current AI systems have no legal personhood, intent, or consciousness.
  9. Intellectual property infringement. There are already lawsuits over the ownership of creative works. Many AI projects train on large datasets scraped from the internet, often “ingesting copyrighted books, articles, code, and artwork without the original creators’ consent or compensation.”
  10. Nonconsensual image generation. AI can already create “deepfake pornography, which can be used for harassment or worse.”

Why should AI companies be held accountable for causing individual harm due to fraud or AI misuse?

At McNicholas & McNicholas, LLP, we are working with AI experts to understand how AI works and how AI could have caused you or a loved one harm. Innovation shouldn’t be a free ride. Developers and deployers of AI programs should understand how their AI technology can be abused or misused – and take appropriate precautions.

The specific precautions depend on the possible abuses, fraud, or types of harm.

For example, designers of self-driving cars should understand when and how drivers should override the AI technology. Social media designers could provide age verification and other checks to prevent predators from impersonating others. Designers of financial or employment programs should anticipate why human oversight is necessary to prevent bias.

For starters, AI developers should ask: What are the risks of using my program, and how can I protect consumers from those risks?


How can lawyers protect victims of AI fraud, abuse, and negligence?

As AI quickly evolves, federal and state governments are working to create laws that regulate AI and protect consumers from AI abuse. Cases involving AI harm are increasingly being brought in courts throughout the country. A recent conversation with an assistant professor at Drexel’s Kline School of Law (in Philadelphia, PA) explores how the legal system is addressing harmful AI technologies.

Litigation

Legal actions involving AI are growing at a rapid pace.  Long-standing tort law doctrines are being applied in AI-related actions.  Common theories of liability include:

  • Strict liability. Strict liability may apply in limited circumstances, particularly where AI is integrated into a physical product, but courts are still determining whether standalone AI software or platforms qualify as a “product” under California law.
  • Negligence. Developers, deployers, or users of AI could be liable if they acted unreasonably under the circumstances and did not meet the applicable standard of care.

According to the Drexel assistant professor, one of the problems with AI claims is that:

Because many AI systems lack explainability, it can be difficult to establish a clear causal link between the system’s behavior and the resulting harm, making negligence claims especially challenging, particularly when assessing whether the harm was truly foreseeable.  Even so, tort law has repeatedly shown its ability to evolve alongside new technologies, and it is likely to do so again in the context of AI.

Regulation

The laws regulating AI are evolving, too.

Colorado and California offer two leading examples, each taking a different path: Colorado has adopted a comprehensive, consumer-focused framework aimed at preventing discriminatory outcomes, while California has pursued a series of more targeted bills addressing issues such as transparency, deepfakes, and employment-related discrimination.

Some California laws and existing legal theories may support civil claims in certain AI-harm contexts, while other AI-related requirements are enforced primarily by government agencies.

Copyright

Authors, musicians, and other creatives may be harmed by AI misuse. Copyright infringement lawsuits against companies that develop generative AI systems continue to be litigated and settled. Core doctrines, like fair use, direct and indirect infringement, and authorship, are all being reconsidered and reshaped as AI’s use and its effects on creativity evolve.

At McNicholas & McNicholas, LLP, we stay current with the evolving court cases, federal and state laws, and regulations that protect victims from AI abuse, intentional harm, and other covered harms.

Do you have an AI and emerging technology harm lawyer near me?

We litigate AI abuse cases and represent victims across the country. Our trial lawyers are available to consult with victims by phone and through online video discussions. Our Los Angeles office is located in Westwood at 10866 Wilshire Blvd., Suite 1400. We also have offices in Orange County and Northern California.

We can review your case, explain your rights, and determine whether AI companies may be responsible for your injuries.

Contact our artificial intelligence fraud attorneys today

At McNicholas & McNicholas, LLP, we understand how much AI is changing our world. We understand when victims of AI technology have a legal claim. If you suspect that AI may have played a role in your harm, please contact us to schedule a free consultation.

"*" indicates required fields

The use of the Internet or this form for communication with the firm or any individual member of the firm does not establish an attorney-client relationship. Confidential or time-sensitive information should not be sent through this form.