Day: August 7, 2022

  • Meta AI’s new “BlenderBot”: The next generation!

    Meta AI announced a few days ago that it has released a new version of its advanced BlenderBot chatbot, one that can remember previous interactions and learn from them.

    In a blog post, Meta AI researchers said the upgraded chatbot can search the internet for information so it can chat about almost any topic while improving its conversational skills through natural conversations and feedback “in the wild.”

    According to Meta AI, BlenderBot 3 is the world’s first 175-billion-parameter publicly available chatbot, complete with model weights, code, datasets, and model cards.

    The original BlenderBot, which Meta AI launched two years ago, had the ability to blend skills such as empathy, knowledge, and personality into a complete AI system. One year after that, Meta AI launched BlenderBot 2, in which researchers added a long-term memory capability that enabled it to hold more engaging and sophisticated conversations on virtually any topic.

    Meta AI’s long-term goal is to build more realistic AI systems that can interact with humans in more intelligent, useful, and safer ways, and to do this it says it must adapt the models that power them to our ever-changing needs.

    Meta AI claims that BlenderBot 3 delivers superior performance to previous chatbots because it is built on the company’s publicly available OPT-175B language model, which is 58 times larger than the model that powered BlenderBot 2.


    “Most previously publicly available datasets are typically collected through research studies with annotators that can’t reflect the diversity of the real world,” Meta AI explained.

    Read: Time Travel Using AI + VR?

    Through a live, public demo, so far available only in the U.S., BlenderBot 3 can learn from interactions with anyone. The experience it gains from these conversations will enable it to hold longer and more diverse conversations, Meta AI said. Anyone who chats with it can rate each response with a thumbs up or down and, for a thumbs down, flag why they disliked the response: it was off-topic, rude, spam-like, nonsensical, or something else.

    BlenderBot 3 also takes steps to address the reality that not everyone who is using it will have good intentions. To that end, it incorporates learning algorithms aimed at distinguishing between helpful and harmful feedback.
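
    Meta AI’s post doesn’t spell out those learning algorithms, but the general idea of separating helpful from harmful feedback can be sketched as a trust-weighting scheme. Everything below (the class name, update rule, and thresholds) is an illustrative assumption, not Meta AI’s actual method:

```python
# Toy sketch of trust-weighted feedback filtering.
# All names, thresholds, and the update rule are illustrative
# assumptions, not Meta AI's published algorithm.

from collections import defaultdict

class FeedbackFilter:
    def __init__(self, initial_trust=0.5, lr=0.1, min_trust=0.2):
        self.trust = defaultdict(lambda: initial_trust)  # per-user trust score
        self.lr = lr                  # how fast trust moves per observation
        self.min_trust = min_trust    # below this, feedback is ignored

    def update_trust(self, user_id, agreed_with_consensus):
        """Nudge a user's trust toward 1 when their votes match the
        consensus of other users, and toward 0 when they don't."""
        target = 1.0 if agreed_with_consensus else 0.0
        self.trust[user_id] += self.lr * (target - self.trust[user_id])

    def weighted_score(self, votes):
        """Combine (user_id, thumbs_up) votes into one score in [-1, 1],
        skipping users whose trust has fallen below the cutoff."""
        total, weight = 0.0, 0.0
        for user_id, thumbs_up in votes:
            t = self.trust[user_id]
            if t < self.min_trust:
                continue  # treat this user's feedback as likely harmful
            total += t * (1.0 if thumbs_up else -1.0)
            weight += t
        return total / weight if weight else 0.0
```

    In this sketch, a user whose votes repeatedly disagree with the consensus loses trust until their feedback stops counting at all, which is one simple way to blunt deliberate trolling.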

    “We hope this work will help the wider AI community spur progress in building ever-improving intelligent AI systems that can interact with people in safe and helpful ways,” the researchers said.

    The next generation of BlenderBot

    The first BlenderBot in the series was little more than a toy; the second was a step forward, adding long-term memory and a vocabulary that had grown to 200,000 words. BlenderBot 3 keeps that long-term memory and adds the ability to learn on its own.

    That means the next generation of BlenderBot could move toward a full cognitive architecture: knowledge of facts about its environment; the ability to learn from interactions with people in real time; models of perception and cognition built on data from external sources; and personality and emotions whose parameters vary with circumstances.

    In addition, the next generation of BlenderBot should be able to interact with people by generating something more than a preset answer, for instance demonstrating a sense of humor.

    Meanwhile, Google’s Brain Team has announced Imagen, a text-to-image AI model that can generate photorealistic images of a scene from a textual description, while OpenAI’s DALL·E 2 is a new AI system that creates realistic images and art from a description in natural language.

    Source here

  • Dawn of creating biohybrid robots in the future: Scientists turn dead spiders into robots

    Scientists have succeeded in turning dead spiders into robots, signaling the dawn of biohybrid robotics.

    As reported on July 25 in Advanced Science, scientists working in a field known as “necrobotics” converted wolf spider corpses into mechanical grippers. All the team needed to do was insert a syringe into the back of a dead spider and superglue it in place; the legs clenched open and shut as the researchers pushed fluid into and out of the corpse.

    According to Faye Yap, a mechanical engineer at Rice University in Houston, the idea was born from a simple question: Why do spiders curl up when they die?

    The answer is that spiders are hydraulic machines: they control how far their legs extend by forcing blood into them. Because a dead spider no longer has that blood pressure, its legs curl up.
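
    The hydraulic principle can be illustrated with a toy model in which leg extension grows with internal fluid pressure and collapses to zero when the pressure is gone. The saturation formula and all numbers below are made up for illustration; they are not biomechanics from the paper:

```python
# Toy model of hydraulic leg extension: extension rises with internal
# fluid pressure and the leg curls fully shut at zero pressure.
# The formula and constants are illustrative, not from the study.

def leg_extension(pressure_kpa, max_extension=1.0, half_point_kpa=5.0):
    """Fraction of full extension (0 = fully curled, 1 = fully extended)
    as a saturating function of internal fluid pressure."""
    if pressure_kpa <= 0:
        return 0.0  # no pressure: the leg curls shut
    return max_extension * pressure_kpa / (pressure_kpa + half_point_kpa)

# Cycling pressure with a syringe opens and closes the "gripper".
for p in [0.0, 5.0, 20.0, 0.0]:
    print(f"{p:5.1f} kPa -> extension {leg_extension(p):.2f}")
```

    Pumping fluid in and out with the syringe thus cycles the legs between extended (open) and curled (closed), which is exactly what turns the corpse into a usable gripper.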

    Yap and her team first tried putting dead wolf spiders in a double boiler, hoping that the wet heat would make the spiders expand and push their legs outward. That initially didn’t work. However, when the researchers injected fluid straight into a spider corpse, they found that they could control its grip well enough to pull wires from a circuit board and pick up other dead spiders. The necrobots started to become dehydrated and show signs of wear only after hundreds of uses.

    The researchers say that in the future they will coat the spiders with a sealant to hold off that decline. The next big step, Yap said, is to control the spiders’ legs individually and, in the process, figure out more about how spiders work. Her team could then translate that understanding into better designs for other robots.

    Recommended: Researchers Built Innovative Nanorobot Entirely from DNA

    Asked whether it’s okay to play Frankenstein, even with spiders, Yap says, “No one really talks about the ethics when it comes to this sort of research.”

    This research signals the possibility of a new class of biohybrid robots, which are expected to be able to work in harsh environments, such as the deep sea, and to move through water without a propeller. For example, Virginia Tech College of Engineering researchers in 2013 unveiled a lifelike, autonomous robotic jellyfish the size and weight of a grown man: 5 feet 7 inches in length and weighing 170 pounds.

    This study is an example of how engineering and biology can be combined for the creation of new technological tools and applications.

    Source here