Risks of Large Language Models (LLM)
WatsonX is one of IBM's AI technologies and makes use of large language models (LLMs). LLMs are known for generating text like a native speaker, and their impressive performance has captivated many people. At the same time, however, LLMs carry risks of their own.
According to Phaedra Boinodiris, LLMs only generate text; they do not actually understand it. Businesses, brands, and society therefore need to recognize the potential risks of depending on them. In this video, Phaedra Boinodiris explains the risks of relying on LLMs and strategies for mitigating them.
When adopting WatsonX, it is important to understand not only its impressive performance but also its potential risks, and to take appropriate countermeasures. Covering WatsonX's features and benefits together with its risks and mitigation strategies in detail will hold readers' interest, and focusing on how WatsonX affects business and society makes for an article that prompts deeper reflection on the appeal and potential of AI technology.
I have realised that making myself better is more lucrative to me than outsourcing everything to AI. And AI doesn't bother me; it's the business model. First they want to make you dependent, and then you have the ultimate subscription model with credits and shit.
Very relevant presentation. Who is the presenter?
I always wonder how they record videos like this.
good
As for energy, I would have been using Google instead of ChatGPT. I feel empowered ❤🎉
LLMs often apologise to me
I can save you all money by telling you to download Ollama, then offload LLMs onto local systems. There's your 100% lineage overview capability that you usually don't get with the wider net of training data.
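For anyone wondering what "offloading onto a local system" looks like in practice, here is a minimal Python sketch that queries a locally served model through Ollama's REST API. It assumes Ollama is already installed and running on its default port, and the model name and prompt are just placeholder examples, not recommendations.

```python
import requests

# Minimal sketch: ask a locally served model a question via Ollama's REST API.
# Assumes Ollama is running on its default port (11434) and that a model
# named "llama3" has already been pulled (e.g. with `ollama pull llama3`).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # placeholder choice of local model
        "prompt": "Summarize the main risks of large language models.",
        "stream": False,     # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text never leaves your machine
```

Nothing here is sent to a third-party service, which is the point the comment is making about lineage and control.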
very interesting stuff.
Interesting.
*ahem 2:50 Yes, Air Canada, that means YOU. haha
Love the energy! Educate … best way to end this presentation, as it is really an invitation to press on and learn more. AI is not going away, so we need to learn how to use it properly and responsibly. This is no different than any other major advancement humankind has accomplished in the past.
Loving this series!
This video raises some very valid points. My thoughts are that technology will ultimately be empowering when it is open source and decentralized, and ultimately authoritarian when it is proprietary and centrally controlled.
We need to revisit the meaning of "Proof"– philosophically, semantically, and in everyday usage. Greater attention needs to be paid to the history of the methods and of the data — the equivalent of a "digital genealogy" but without the "genes." So much of what I see written about AI today reminds me of a quote in Shakespeare's Troilus and Cressida — "And in such indexes, through but small pricks to their subsequent volumes, lies the giant shape of things to come." Finally, the process of recycling data in and out of these systems describes the "Ouroboros." More thought needs to be given to the meanings of the Ouroboros.
Very nicely explained risks and mitigations!! It couldn't be simpler than this.
So LLMs don't actually understand stuff. They just predict the next likely word in a sentence.
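That is roughly it: the model only scores which token is likely to come next. A rough Python illustration using a small Hugging Face model (gpt2 is just an arbitrary small example; it assumes the transformers and torch packages are installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Rough illustration: a causal LM assigns probabilities to possible next tokens.
# "gpt2" is just a small example model; any causal LM behaves the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # scores for every vocabulary token at each position

next_token_logits = logits[0, -1]          # scores for the token that would follow the prompt
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  {prob.item():.3f}")
```

The model picks (or samples) from that probability list; there is no fact-checking step, which is where hallucinations come from.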
Garbage video
Can a subsequent SFT and RLHF with different, additional, or less content change the character of a GPT model, improve it, or degrade it?
Simple answer is no
I would also like to add: AI that intervenes in the user experience in an unwanted and annoying manner, taking over control from the human user, with popups of screens the user did not ask for, adding apps the user did not ask for, changing layouts the user did not ask for… in other words, taking over control from the human user as far as UX is concerned. Mobile apps that seem innocent can be equipped with AI that starts dominating people's behaviour, habits, and lives…
00:31 Risks of large language models (LLMs) include spreading misinformation and false narratives, potentially harming brands, businesses, individuals, and society.
01:03 Four areas of risk mitigation for LLMs are hallucinations, bias, consent, and security.
01:34 Large language models may generate false narratives or factually incorrect answers due to their ability to predict the next syntactically correct word without true understanding.
03:00 Mitigating the risk of falsehoods involves explainability, providing real data and data lineage to understand the model's reasoning.
03:59 Bias can be present in LLM outputs, and addressing this risk requires cultural awareness, diverse teams, and regular audits.
05:06 Consent-related risks can be mitigated through auditing and accountability, ensuring representative and ethically sourced data.
06:01 Security risks of LLMs include potential misuse for malicious tasks, such as leaking private information or endorsing illegal activities.
07:01 Education is crucial in understanding the strengths, weaknesses, and responsible curation of AI, including the environmental impact and the need for safeguards.
07:32 The relationship with AI should be carefully considered, and education should be accessible and inclusive to ensure responsible use and augmentation of human intelligence.
Glad you added the three dots via After Effects. Was a gamechanger.
This is supposed to be purely informative, yet I see politically charged statements being used. Frustrating to see. The point of this is to teach people; people want to learn, not see some bogus politically charged statement.
Are you guys engineers or lawyers? xd
I'm guessing bias is from the people training it? Or is it from the internet? Would a Chinese AI have a bias towards white male poets?
If the latter, why?
Are there more white male poets that have been published or quoted on the internet than other groups?
I think we have to be careful that our own biases don't mean we perceive bias that isn't there.
Pretty much. "Use with care".
Brilliant Explanation!
Useless video. Didn’t provide any meaningful insights for anyone who knows even a little about this technology.
I can't believe no one else has noticed how astoundingly good this lady is at writing backwards.
Very good points, but the mitigation strategies are not really actionable.
Using 'hallucinations' as a term to describe the output of an inanimate system is a calculated distortion of language by the media.
Well done! Remarkable content here, thank you.
Excellent explanation. However, in terms of bias and audits as a mitigation you did not say who would be doing the audits. The assumption is that it is easy to find unbiased auditors and you immediately run into the problem of "quis custodiet ipsos custodes?" To my mind this is a much greater risk as the potential for misuse and harm is huge.
Means AI is genius but totally dumb, similar to what happens when there is no emotion associated with the knowledge/information; totally not good for humans.
Model kog
I'm not clear on how to provide consent/accountability. Is there any existing solution that gets permission from the data sources LLMs scrape? Without any basis in reality it doesn't feel like much of a strategy…
The only question is the risk of error and the associated liability; if there is no liability, then the risks associated with making poor inferences (for any AI model) can be ignored. When there is liability, the question is what mitigations must be implemented in order to underwrite insurance for that liability. The hypothesis that an unexplainable (i.e. stochastic) system may be insured is false; we must look to the multi-phase clinical trials process, especially phase IV surveillance, as a mechanism to provide evidence of safety, efficacy, and monitoring of adverse events.
Why is there nothing about the speaker?
too real…
I think "positive and negative abstractions" is a better way to say "hallucination" in this regard.
Thank you 🙂
AI is a hologram of the collective human knowledge.
So yeah, they have hallucinations and embedded emotions and biases.
Quick poll. If companies making LLMs were going to buy IBM mainframe hardware to train them on and run them on in inference mode, how quickly do you think IBM would pull this video down?
IBM stopped being a computer company decades ago. This is a perfect reflection of what IBM has become. It is a great legal and financial company.
6:54 Another concerning thing: these people are in charge of this stuff. What if the AI said climate change wasn't a big deal and that plants grow better in a carbon-rich environment? They would censor it. The true danger of AI is that large companies would train it to lie based on political bias. We have a very sad future ahead of us.
0:24 I can guarantee it won't be "trustworthy"; they are already censoring the AI based on political bias and other biases. We need an actual "open AI".
LLMs may have fallacies in some or many areas, but one thing we must understand is that GPT-x self-improvement is all based on our data. That means we help GPT become stronger and stronger, especially by stating "his" weaknesses.
Relying on them solely for accurate info is still a problem. However, if you actually converse with them, hallucinating and not being accurate is on some level very similar to humans in the first place.
How can I contribute?