
AI Diaries: Weekly Updates #6


Welcome to this week's edition of AI Diaries: Weekly Updates! In this issue, we're highlighting some of the most intriguing developments in the AI and tech world.

First up, we delve into a unique political experiment in Cheyenne, Wyoming. Victor Miller, a mayoral candidate, is taking a bold step by incorporating an AI bot named Vic into his governance strategy. Despite facing legal challenges, Miller remains on the ballot, showcasing a pioneering approach that blends AI decision-making with human oversight. This initiative marks a first in U.S. politics, raising important questions about the role of technology in governance.

Next, we explore the viral sensation "Deep-Live-Cam," an open-source tool that creates real-time deepfakes from a single photo. This innovative technology allows users to seamlessly transform their appearance during live video calls, sparking both fascination and concern. While the tool highlights significant advancements in deepfake applications, it also underscores the ethical implications of such powerful technology.

Moving on, the Google Pixel 9 introduces the groundbreaking "Add Me" feature, revolutionizing group photography. This AI-powered tool enables users to include themselves in photos effortlessly by merging two images—one with the group and one with the photographer swapped in. This feature not only enhances user experience but also sets a new standard in smartphone photography.

In a major breakthrough in neurotechnology, UC Davis Health has developed a brain-computer interface (BCI) that allows individuals with ALS to communicate through brain signals with up to 97% accuracy. This advancement in neuroprosthetics offers hope for those with severe speech impairments, transforming their ability to engage with the world.

We also spotlight Neuralink’s latest achievement as Alex, a participant with a spinal injury, demonstrates the potential of brain implants to control digital devices through thought alone. Alex’s use of the Neuralink implant to play video games and explore creative pursuits highlights the transformative potential of this technology for individuals with disabilities.

Additionally, researchers at the University of Delaware are making waves with a nanotechnology-based cancer treatment. Utilizing carbon nanotubes, this approach targets and destroys cancer cells with precision while sparing healthy tissue, representing a significant leap forward in cancer care.

Next, we introduce the Astribot S1, a humanoid robot designed for household tasks. From making waffles to feeding pets, the Astribot S1 promises to ease daily chores with its flexible design and dual-digit grippers, marking a step forward in practical household robotics.

Finally, we look at AI's arrival on the comedy stage, where comedians like Anesti Danelis are using tools such as ChatGPT to brainstorm jokes and structure shows, while the human touch remains essential to actually landing a laugh.


These stories offer valuable insights and showcase the remarkable progress being made in technology and AI. Enjoy the read, and we invite you to share your thoughts in the comments below!

Let’s dive in.


Revolutionizing Governance: Can ChatGPT Lead the City?



TL;DR: Wyoming mayoral candidate Victor Miller proposes a groundbreaking approach by planning to govern Cheyenne with the help of an AI bot named Vic. Despite legal challenges, Miller’s name remains on the ballot, though the bot’s involvement raises questions about AI in politics.


What's the Essence?: Victor Miller is running for mayor of Cheyenne, Wyoming, with a unique plan to incorporate his customized AI bot, Vic (Virtual Integrated Citizen), into governance. Miller asserts that Vic will handle data-driven decision-making while he ensures the legal and practical execution of the bot’s insights. This hybrid governance approach is touted as a first in U.S. political history, blending AI capabilities with human judgment to lead the city.


How Does It Tick?: The AI bot, Vic, is designed to provide data-driven solutions by processing large amounts of information without bias. It will gather public opinions, hold town hall meetings, consult experts, and evaluate human impacts before making decisions. Meanwhile, Miller will act as the official mayor, ensuring that all actions are legally sound. Despite legal hurdles, including an investigation by Wyoming’s secretary of state and OpenAI’s temporary shutdown of Miller’s account, the campaign continues, with Miller’s name appearing on the ballot.

Why Does It Matter?: This campaign highlights the increasing intersection of AI and governance. Miller’s proposal to let an AI bot co-govern Cheyenne raises important questions about the role of technology in politics. While AI can offer data-driven insights, the necessity of human empathy and judgment in decision-making remains crucial. This case also underscores the legal and ethical challenges that arise when integrating AI into public office, setting a precedent for future AI involvement in political campaigns.



---


Deepfakes in Real-Time: The Viral AI Tool That's Transforming Video Calls


Gif: Elon Musk Webcam Deepfake


TL;DR: The new open-source software "Deep-Live-Cam" has gone viral for its ability to create real-time deepfakes using only a single photo. This tool allows users to transform into anyone during a video call, sparking both fascination and concern over its potential misuse.

What's the Essence?: "Deep-Live-Cam" is a free deepfake AI tool that lets users become someone else during a live video call with just one photo. The software can seamlessly overlay a person’s face onto a live webcam feed, capturing the original person's pose, lighting, and facial expressions in real-time. Developed since late 2023, the tool recently gained massive popularity due to viral videos on X (formerly Twitter) demonstrating its capabilities. Despite some minor imperfections, the technology showcases significant advancements in real-time deepfake applications.


How Does It Tick?: The Deep-Live-Cam tool works by using AI to detect faces in both the source image (the photo you want to use) and the target (your live video feed). It then applies a pre-trained model called “inswapper” to swap the faces, while a secondary model called “GFPGAN” refines the image to correct artifacts and enhance the quality. The inswapper model is trained on a vast dataset of millions of facial images, allowing it to predict how a face might appear from different angles and under varying lighting conditions. This advanced AI enables the software to create convincing deepfakes in real-time, although the results may still show some flaws.
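
For the technically curious, here is a minimal sketch of that detect-swap-enhance loop, assuming a plain OpenCV webcam feed. The swap_face and enhance_face helpers are hypothetical placeholders standing in for the "inswapper" and "GFPGAN" models described above, not the project's actual code:

```python
# Minimal sketch of a real-time face-swap loop in the spirit of Deep-Live-Cam.
# swap_face() and enhance_face() are hypothetical stand-ins for the "inswapper"
# and "GFPGAN" models described above, not the project's real API.
import cv2

def swap_face(frame, source_face):
    """Hypothetical: paste the source face onto the detected face in `frame`."""
    return frame  # placeholder; a real implementation calls the face-swap model

def enhance_face(frame):
    """Hypothetical: clean up blending artifacts (the role GFPGAN plays)."""
    return frame  # placeholder; a real implementation calls a restoration model

source_face = cv2.imread("source_photo.jpg")   # the single photo to impersonate
cam = cv2.VideoCapture(0)                      # live webcam feed (the target)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    frame = swap_face(frame, source_face)      # step 1: swap the detected face
    frame = enhance_face(frame)                # step 2: refine the composite
    cv2.imshow("deepfake preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press 'q' to quit
        break

cam.release()
cv2.destroyAllWindows()
```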


Why Does It Matter?: Deep-Live-Cam represents a major leap forward in deepfake technology, making real-time identity transformation more accessible than ever before. While this innovation could be fun and useful for entertainment or creative purposes, it also raises serious concerns about misuse. The tool’s ability to create lifelike deepfakes could be exploited for malicious activities, such as impersonation scams or spreading misinformation. As deepfake technology becomes more sophisticated, the lines between reality and deception blur further, emphasizing the need for vigilance and ethical considerations in the digital age.



---


Picture Perfect: Google Pixel 9's 'Add Me' Feature Changes Group Photos Forever



TL;DR: The Google Pixel 9 introduces the "Add Me" feature, an AI-powered tool that allows the photographer to seamlessly include themselves in group photos. This game-changing feature highlights Google’s continued innovation in smartphone photography, making group shots easier and more inclusive than ever.


What's the Essence?: The Google Pixel 9's "Add Me" feature leverages AI and augmented reality to solve a common photography dilemma: how to include the photographer in group photos without needing a tripod or a stranger's help. By taking two photos—one with the group and one with the photographer swapped in—this feature merges the images into one cohesive shot, placing the photographer naturally within the group. The "Add Me" tool showcases Google’s commitment to enhancing user experience with practical and user-friendly innovations.


How Does It Tick?: Using the "Add Me" feature is straightforward yet powerful. First, take a group photo as usual. Then, switch places with someone in the photo and have them take another picture of the group with you included. The Pixel 9 guides you with positioning tips to ensure consistency between the shots. Once the photos are taken, activate the "Add Me" feature, which uses AI to blend the two images seamlessly. The result is a natural-looking group photo with everyone included, no awkward editing or cropping required. This feature is part of Google’s broader AI strategy, exemplified by the Gemini Live rollout, underscoring their focus on integrating advanced technology into daily life.
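
Google has not published how the merge is implemented, but the underlying idea of compositing one shot into the other can be illustrated with a toy sketch. It assumes the two shots are already aligned and that a rough mask of the photographer exists (the file names are hypothetical), both of which the Pixel's on-device pipeline handles automatically and far more robustly:

```python
# Toy illustration of the "merge two shots" idea behind Add Me.
# Assumes shot_a.jpg and shot_b.jpg are already aligned and that
# photographer_mask.png is a rough white-on-black mask of the person
# to pull from shot B -- steps the Pixel handles automatically.
import cv2
import numpy as np

shot_a = cv2.imread("shot_a.jpg")              # group photo, photographer missing
shot_b = cv2.imread("shot_b.jpg")              # second shot with photographer in frame
mask = cv2.imread("photographer_mask.png", cv2.IMREAD_GRAYSCALE)

# Feather the mask edges so the pasted person blends into shot A.
mask = cv2.GaussianBlur(mask, (21, 21), 0).astype(np.float32) / 255.0
mask = mask[..., None]                         # broadcast over the BGR channels

merged = (shot_b * mask + shot_a * (1.0 - mask)).astype(np.uint8)
cv2.imwrite("group_with_everyone.jpg", merged)
```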


Why Does It Matter?: The "Add Me" feature on the Google Pixel 9 is more than just a novelty—it represents a significant leap in smartphone photography, driven by AI. It addresses a common frustration for users who want to be part of the memories they capture, without compromising the quality or convenience of the photo-taking process. As AI continues to evolve, features like "Add Me" demonstrate how technology can simplify and enhance everyday experiences. For Google, this innovation also strengthens their position in the competitive smartphone market, highlighting their ability to deliver meaningful, user-centric improvements. In a world where photos are central to how we connect and remember, the "Add Me" feature ensures no one gets left out of the picture.




---


Speech Restored: The BCI That Lets ALS Patients Communicate Again



TL;DR: A groundbreaking brain-computer interface (BCI) developed by UC Davis Health allows a man with amyotrophic lateral sclerosis (ALS) to 'speak' again by translating brain signals into speech with up to 97% accuracy. This breakthrough could restore communication for people who have lost the ability to speak due to paralysis or neurological conditions.


What's the Essence?: The new BCI technology, created at UC Davis Health, enables people with severe speech impairments, like those caused by ALS, to communicate through brain signals. The system interprets these signals and converts them into text that is then spoken aloud by a computer. In a study, the BCI helped a man with ALS regain his ability to communicate effectively within minutes of activation, achieving an unprecedented 97% accuracy rate. The technology represents a significant advancement in neuroprosthetics, aiming to break down communication barriers for those affected by paralysis.


How Does It Tick?: The BCI system works by detecting the brain's attempts to move muscles and speak, even when the body is unable to perform these actions. Tiny microelectrode arrays are implanted in the brain, recording activity that the system decodes into phonemes and words. The process starts with a small vocabulary and rapidly expands as the system learns, achieving high accuracy in interpreting the user's intended speech. In one case, the system reached 99.6% accuracy with a 50-word vocabulary in just 30 minutes and maintained over 97% accuracy even as the vocabulary increased to 125,000 words. The technology also replicates the user’s natural voice, offering a personalized and transformative experience.
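
To make the "brain signals to phonemes to text" pipeline concrete, here is a deliberately simplified sketch. The UC Davis decoder is far more sophisticated and pairs its output with language modeling; this toy nearest-centroid classifier with made-up calibration data only illustrates the mapping from a window of neural features to a phoneme label:

```python
# Toy sketch of the core idea: map a window of neural features to a phoneme.
# The real system's decoder is far more sophisticated; this nearest-centroid
# classifier with made-up data only makes the pipeline concrete.
import numpy as np

PHONEMES = ["AA", "IY", "S", "T"]              # tiny illustrative label set

# Hypothetical calibration data: a mean feature vector per phoneme
# (one value per recording channel), learned while the user attempts speech.
rng = np.random.default_rng(0)
centroids = {p: rng.normal(size=64) for p in PHONEMES}   # 64 channels, made up

def decode_window(features: np.ndarray) -> str:
    """Return the phoneme whose calibration centroid is closest to `features`."""
    return min(PHONEMES, key=lambda p: np.linalg.norm(features - centroids[p]))

# Simulate a noisy attempt to produce the phoneme "S".
window = centroids["S"] + rng.normal(scale=0.3, size=64)
print(decode_window(window))                   # -> "S" (most of the time)
```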


Why Does It Matter?: This BCI technology is a game-changer for individuals who are unable to speak due to conditions like ALS. For those trapped in silence, the ability to communicate again offers hope and a pathway back to active social interaction. The system’s high accuracy and relatively fast training make it a practical tool for restoring speech, giving users the power to engage with their environment and loved ones once more. The success of this technology in clinical trials sets the stage for broader applications, potentially improving the lives of many people with severe disabilities.



---


Gaming with Your Mind: How Neuralink is Redefining Digital Interaction



TL;DR: A participant named Alex is the second person to receive a Neuralink brain implant after suffering a spinal injury that left him unable to control his limbs. Thanks to the implant, he can now play Counter-Strike 2 and explore creative pursuits like 3D design.


What's the Essence?: Alex’s journey with Neuralink showcases the potential of Elon Musk’s brain technology startup to revolutionize how people with quadriplegia interact with the digital world. After undergoing the procedure at the Barrow Neurological Institute in Arizona, Alex can now play video games like Counter-Strike 2 using a combination of a specialized mouth-operated Quadstick joystick and pure thought to aim his weapons. Beyond gaming, he’s also learning to use CAD software, marking a significant leap in how the Link implant can empower its users to perform complex, creative tasks.


How Does It Tick?: The Link implant, developed by Neuralink, connects directly to the brain, enabling users to control digital devices with their minds. To address technical issues from the first implantation, Neuralink made several improvements for Alex’s procedure. These enhancements include reducing the gap between the implant and the brain surface and minimizing brain motion during surgery, ensuring the implant remains stable and fully operational. Alex’s recovery has been smooth, and he’s already demonstrating the vast potential of the Link through gaming and design projects. Neuralink continues to refine its technology, aiming to expand its applications to include full mouse and video game functionality, as well as interactions with the physical world via robotic arms and powered wheelchairs.
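
Neuralink has not published its decoding algorithms, but the textbook idea behind thought-driven cursor control is a decoder that maps a vector of neural features to a 2D cursor velocity. The sketch below shows that generic idea with made-up weights and simulated activity; it is not Neuralink's implementation:

```python
# Generic sketch of thought-to-cursor control: a linear decoder maps a vector
# of neural features to a 2D cursor velocity. Weights and data are made up;
# this is the textbook BCI idea, not Neuralink's actual decoder.
import numpy as np

N_CHANNELS = 1024                              # the Link records from many channels
rng = np.random.default_rng(1)

# Hypothetical decoder weights, normally fit during a calibration session
# in which the user imagines moving toward on-screen targets.
W = rng.normal(scale=0.01, size=(2, N_CHANNELS))   # maps features -> (vx, vy)

cursor = np.array([0.0, 0.0])                  # cursor position on screen

def update_cursor(features: np.ndarray, dt: float = 0.02) -> np.ndarray:
    """Integrate the decoded velocity into the cursor position (50 Hz loop)."""
    global cursor
    velocity = W @ features                    # decoded (vx, vy) in pixels/second
    cursor = cursor + velocity * dt
    return cursor

# One simulated 20 ms tick of neural activity.
print(update_cursor(rng.normal(size=N_CHANNELS)))
```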


Why Does It Matter?: Alex’s successful use of the Neuralink implant is a promising glimpse into the future of brain-computer interfaces. This technology could dramatically improve the quality of life for individuals with disabilities, offering them new ways to engage with the world around them. By enabling tasks that were once impossible, such as playing video games or designing 3D objects, Neuralink is pushing the boundaries of what’s possible in the realm of human augmentation. As the technology advances, it holds the potential to bridge the gap between thought and action, allowing users to regain independence and achieve a sense of normalcy. Alex’s experience is just the beginning, hinting at a future where mind-controlled devices could become a reality for many.


---


Nanotech vs. Cancer: The Cutting-Edge Therapy Targeting Tumors with Precision



TL;DR: Researchers at the University of Delaware have developed a groundbreaking nanotechnology-based cancer treatment that uses carbon nanotubes to selectively target and destroy cancer cells while sparing healthy tissue. This innovation could revolutionize the way cancer is treated, offering a more precise and less invasive alternative to traditional therapies.


What's the Essence?: A team at the University of Delaware, led by Assistant Professor Balaji Panchapakesan, has harnessed the unique properties of carbon nanotubes to create a highly targeted cancer treatment. By bundling these nanotubes and exposing them to light, they can induce nanoscale explosions that destroy cancer cells without harming surrounding healthy tissue. This approach represents a significant leap forward in precision medicine, offering the potential for more personalized and effective cancer treatments with fewer side effects.


How Does It Tick?: The treatment works by exploiting the thermal properties of carbon nanotubes, which are cylindrical molecules made of carbon atoms. When exposed to specific wavelengths of light, these nanotubes rapidly heat up and explode, obliterating the cancer cells from within. The nanotubes' ability to generate intense, localized heat allows for precise targeting, ensuring that only the cancerous cells are destroyed while healthy cells remain intact. This method also disrupts the cancer cells' biological pathways, preventing them from multiplying and spreading.


Why Does It Matter?: The development of this nanotechnology-based treatment marks a major advancement in precision medicine, an approach that tailors medical care to individual patients based on their genetic makeup. Traditional cancer therapies like chemotherapy and radiation often damage healthy tissue along with cancer cells, leading to debilitating side effects. The University of Delaware’s technology, however, offers a more focused and less invasive alternative, potentially transforming cancer care and improving patient outcomes.



---


Meet Astribot S1: The Humanoid Robot Redefining Home Assistance



TL;DR: The Astribot S1 is a humanoid robot designed to assist with everyday tasks around the house. It can make waffles, feed a cat, and pour tea, all while maneuvering on motorized wheels. Although it lacks legs, its flexible lower body and dual-digit grippers allow it to perform a range of tasks. Pricing and availability are yet to be announced.


What's the Essence?: The Astribot S1 represents a significant leap in household robotics by offering practical assistance with everyday tasks. Unlike many other robots on the market, the Astribot S1 isn’t just a novelty - it’s a functional helper that can whip up breakfast, care for pets, and serve drinks. Its innovative design includes a wheeled base and a lower body that can bend at the middle and base, providing stability and range of motion. The robot’s dual-digit grippers mimic human hands, allowing it to perform a variety of actions with precision. This versatile robot promises to make daily routines easier, though details on its release and price remain under wraps.


How Does It Tick?: The Astribot S1 is built for utility, featuring a lower body with motorized wheels that provide mobility across different surfaces. The robot’s body can bend at two points, offering flexibility that enhances its ability to interact with objects at various heights. The dual-digit grippers function as hands, capable of learning and performing tasks such as cooking, feeding pets, and pouring beverages. One of the key features is its ability to be plugged in while still in motion, preventing interruptions during use. This design choice makes the Astribot S1 particularly practical for continuous, reliable operation in a home environment.


Why Does It Matter?: The introduction of the Astribot S1 marks an important step forward in the integration of robotics into everyday life. While many humanoid robots remain limited to specific or novelty functions, the Astribot S1 is positioned as a genuinely useful household assistant. Its ability to perform multiple tasks with precision could reduce the burden of daily chores, offering a glimpse into the future of home automation. As robots become more capable and adaptable, devices like the Astribot S1 could play a crucial role in enhancing quality of life, particularly for individuals with mobility issues or busy schedules. Though pricing and availability are still unknown, the Astribot S1 has the potential to set a new standard for home robotics.



---


AI Takes the Stage: How Technology is Shaping the Future of Comedy



TL;DR: AI is stepping into the world of comedy, helping comedians like Anesti Danelis craft jokes and write scripts. While the technology provides creative ideas, many believe that human creativity remains irreplaceable in making audiences laugh.


What's the Essence?: Comedians are increasingly using AI tools like ChatGPT to brainstorm ideas and write scripts. Canadian comedian Anesti Danelis, for example, relied on AI to help him create his show “Artificially Intelligent,” which he’s been performing at the Edinburgh Festival Fringe. Although AI-generated content made up about 20% of his show, Danelis discovered that the real magic still lies in the human touch. Audiences appreciate the unique blend of AI and human creativity, but there is caution about over-relying on the technology.


How Does It Tick?: The process of integrating AI into comedy involves asking AI to generate jokes, songs, or even show outlines. Anesti Danelis experimented with ChatGPT to write songs on topics like “bisexual dilemmas” and “being an immigrant child,” and even created a running order for his performance. While AI proved useful for generating ideas, the delivery and emotional nuance required to connect with audiences still heavily depended on Danelis’s own skills. Other comedians, like Viv Ford, use AI to test jokes, finding that AI’s opinions on what’s funny often don’t match audience reactions. Despite its utility, AI’s limitations in capturing deeply human and vulnerable aspects of humor remain clear.
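
As a minimal illustration of the workflow, here is how a comedian might prompt an LLM for raw premises using the OpenAI Python client (v1 interface). The model name and prompt are illustrative, not what Danelis actually used:

```python
# Minimal example of using an LLM as a brainstorming partner, in the spirit of
# the workflow described above. The prompt and model name are illustrative;
# requires the `openai` package and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a writing partner for a stand-up comedian."},
        {"role": "user", "content": "Give me five joke premises about being an immigrant child."},
    ],
)

# The output is raw material: the comedian still rewrites, tests, and delivers it.
print(response.choices[0].message.content)
```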


Why Does It Matter?: AI’s involvement in comedy is a double-edged sword. On one hand, it offers a novel way to generate content and can serve as a valuable brainstorming tool for experienced comedians. On the other hand, there is concern that new comedians might become overly reliant on AI, leading to a loss of originality and authenticity in their work. As the stand-up comedy market continues to grow, with ticket sales in the US reaching $900 million in 2023, the debate over AI’s role in the creative arts is becoming more relevant. While AI can assist in the creative process, many comedians, including James Roque, believe that the essence of great comedy lies in its deeply human elements, something AI is not yet capable of replicating.



If you've read this far, you're amazing! 🌟 Keep striving for knowledge and continue learning! 📚✨


