Generative artificial intelligence has become the talk of the town in 2023, capturing the public’s fancy and sparking a rush among companies such as Microsoft and Alphabet to launch products built on technology they believe will change the nature of work.
What is Generative AI?
Like other forms of artificial intelligence, generative AI learns how to take actions from past data.
Instead of simply categorising or identifying data as other AI does, it creates brand-new content based on that training: a text, an image, even computer code.
The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released in 2022.
The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response.
GPT-4, a newer model that OpenAI announced this week, is “multimodal” because it can perceive not only text but images as well.
OpenAI’s president demonstrated how it could take a photo of a hand-drawn mock-up for a website he wanted to build, and from that generate a real one.
What is Generative AI good for?
Demonstrations aside, businesses are already putting generative AI to work.
The technology is helpful for creating a first draft of marketing copy, for instance, though it may require cleanup because it isn’t perfect.
One example comes from CarMax, which has used a version of OpenAI’s technology to summarise thousands of customer reviews and help shoppers decide which used car to buy.
Generative AI can likewise take notes during a virtual meeting.
It can draft and personalize emails, and it can create slide presentations.
Microsoft and Alphabet’s Google each demonstrated these features in product announcements this week.
What is wrong with Generative AI?
Nothing inherently, although there is concern about the technology’s potential for abuse.
School systems have fretted about students turning in AI-drafted essays, undermining the hard work required for them to learn.
Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before.
The technology itself is prone to making mistakes.
Factual inaccuracies delivered confidently by AI, known as “hallucinations,” and responses that seem erratic, such as professing love to a user, are all reasons why companies have aimed to test the technology before making it widely available.
Google vs Microsoft
Those two companies are at the forefront of research and investment in large language models, and they are the biggest to put generative AI into widely used software such as Gmail and Microsoft Word.
But they are not alone.
Many other companies, large and small, are either creating their own competing AI or packaging technology from others to give users new powers through software.
You'll soon be able to chat with Google Search, CEO Sundar Pichai confirms
— 2YoDoINDIA News Network (@2yodoindia) April 8, 2023
What about Elon Musk?
Elon Musk was one of the co-founders of OpenAI with Sam Altman.
But Elon Musk left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the AI research being done by Tesla, the electric-vehicle maker.
Elon Musk has expressed concerns about the future of AI and argued in favour of a regulatory authority to ensure that development of the technology serves the public interest.
Elon Musk said at the end of Tesla Inc’s Investor Day: