Hey Guys,
Let’s jump into some of the flashy stories that caught my attention this past week. If you are curious about the stories, simply google the titles.
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023
AGI Debate co-organized by Montreal.AI’s Vince Boucher and Gary Marcus. It’s worth tuning into.
QuickVid uses AI to generate short-form videos, complete with voiceovers
A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos.
Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2 and adds a synthetic voiceover and background music from YouTube’s royalty-free music library (TechCrunch).
How The ChatGPT Watermark Works And Why It Could Be Defeated
A cryptographic watermark is said to be coming that will make it easy to catch ChatGPT-generated content. It’s unclear how much this will prevent cheating in educational settings or if it will be viable.
OpenAI’s ChatGPT made it trivial to generate content automatically, and its reported plans for a watermarking feature to make that content easy to detect are making some people nervous. Online publishers are afraid of the prospect of AI content flooding the search results, supplanting expert articles written by humans.
Watermarking text in ChatGPT involves cryptography: a secret code is embedded as a statistical pattern in the choice of words, letters, and punctuation. The stated goal of watermarking is to prevent the misuse of AI in ways that harm humanity. (Search Engine Journal)
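To make the idea concrete, here is a toy sketch of how a statistical text watermark can work. This is an illustration of the general "green list" technique discussed in the research literature, not OpenAI's actual scheme: the previous token seeds a keyed partition of the vocabulary, generation favors "green" words, and a detector simply counts how often consecutive words land in the green set.

```python
import hashlib
import random

# Tiny stand-in vocabulary for illustration only.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token, fraction=0.5):
    # Seed a RNG from a hash of the previous token. In a real system the
    # hash would be keyed with a secret only the provider knows.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    words = sorted(VOCAB)
    rng.shuffle(words)
    # Half the vocabulary becomes the "green" (favored) set for this step.
    return set(words[: int(len(words) * fraction)])

def detect(tokens):
    # Fraction of tokens that fall in the green list induced by their
    # predecessor. Watermarked text scores well above the ~50% chance rate;
    # human text hovers near it.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev))
    return hits / max(len(pairs), 1)
```

The "defeated" part of the headline follows directly from this sketch: paraphrasing or swapping words breaks the prev-token/next-token pairing, so the green-hit rate falls back toward chance.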
You’ll soon be able to talk to Home Assistant without Google, Siri, or Alexa
So it seems open-source, local “assistants” will be a thing, even as Alexa struggles to monetize and innovate.
Home Assistant, the open-source smart home platform, is getting its own voice assistant. Its founder, Paulus Schoutsen, posted a blog last week announcing a new project that could process all voice commands controlling smart devices locally, without the cloud connection that assistants like Alexa, Siri, and Google Assistant require. The voice assistant is targeted to be available sometime in 2023. (Verge)
AI-assisted code can be inherently insecure, study finds
AI systems like GitHub Copilot promise to make programmers' lives easier by creating entire chunks of "new" code based on natural-language textual inputs and pre-existing context. But code-generating algorithms can also bring an insecurity factor to the table, as a new study involving several developers has recently found. (TechSpot)
The researchers said that when the programmers had access to the Codex AI, the resulting code was more likely to be incorrect or insecure than the “hand-made” solutions conceived by the control group. This even as famous personalities in A.I. sing its praises.
It really makes you wonder!
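To see what “insecure” looks like in practice, here is an illustrative example of one classic flaw category that code-security studies flag: interpolating user input straight into a SQL query. This snippet is my own hypothetical illustration, not code from the study itself.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The kind of suggestion an assistant can produce: user input is
    # interpolated directly into the SQL string, enabling injection.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver handles escaping for us.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# A crafted input dumps every row through the unsafe version...
assert len(find_user_unsafe(conn, "x' OR '1'='1")) == 2
# ...while the parameterized version matches nothing.
assert find_user_safe(conn, "x' OR '1'='1") == []
```

Both functions “work” on benign inputs, which is exactly why this class of bug slips past a programmer who trusts the assistant’s output.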
Reflecting on a roller coaster year for robotics
TL;DR We’ve seen a slow creep of robotics in recent years, but this feels different. The obvious takeaway is that many automakers are getting aggressive about robots, either through investments or through their own divisions. Hyundai’s Boston Dynamics acquisition was very much in the limelight at last year’s CES 2022 show. (TechCrunch)
It’s overly dramatic to suggest that 2022 is the year robotics came crashing back down to Earth, but there was undeniably a lot of market correction. Still, robots keep creeping into the food industry, and the robot-to-worker ratio keeps increasing.
Google Introduces ChatGPT-like ChatBot for Healthcare
Google Research and DeepMind recently introduced MultiMedQA, an open-sourced benchmark for medical question answering. It combines HealthSearchQA, a new free-response dataset of medical questions searched online, with six existing open-question-answering datasets covering professional medical exams, research, and consumer queries: MedQA, MedMCQA, PubMedQA, LiveQA, MedicationQA, and clinical topics from MMLU.
I could not find DeepMind documentation to back up this news. Google Health using LaMDA for chatbots does make a lot of sense, though.
LaMDA, which stands for Language Model for Dialogue Applications, is a family of conversational neural language models developed by Google.
A large language model for electronic health records
LLMs for special use cases are coming rapidly in 2023. “Transformers, more than meets the eye.”
There is increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, and the largest one trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). (Nature)
Teachers are on alert for inevitable cheating after release of ChatGPT
Generative A.I. may not bring a better world. I’ve seen tons of articles in the last few weeks about this as it relates to education. ChatGPT offers a glimpse of a future in which computer-generated answers may be undetectable.
Teachers and professors across the education system are in a near-panic as they confront a revolution in artificial intelligence that could allow for cheating on a grand scale. Where will this even lead?
Generative AI: The technology of the year for 2022
When evaluating the most significant innovations of any calendar year, it’s often a struggle to decide among a handful of equally worthy contenders. Not this year. Over the last 12 months, one category of technology has made headlines so often and impacted society so significantly that there is no question 2022 will be remembered as the year Generative AI stunned the world. Does humanity own A.I., or are companies using LLMs copying and violating our creativity? It’s an ethical and legal question of our times. (BigThink)