Category: Latest

  • MIT Student Invents a Device to Search Google Using Brain

    Arnav Kapur’s innovation, AlterEgo, enables individuals to interface with technology using their thoughts. Contrary to merely engaging in internal dialogue, users effectively communicate with technological interfaces.

    Imagine being able to operate machinery or run an internet search through thought alone. That is exactly what AlterEgo makes possible.

    AlterEgo works by capturing the subtle signals produced when the user internally articulates words or responds to auditory cues. These signals are relayed to computing systems connected to the internet, which interpret the user's mental commands and retrieve the requested information. In essence, the device lets you run an online search with your mind alone, like using a search engine with your thoughts.

    The apparatus is a headset fitted with highly sensitive sensors that can detect these minute signals. Subtle as they are, the signals mirror the cognitive processes behind the user's thoughts, allowing machinery to be controlled or the web to be browsed as naturally as speaking aloud, a feat AlterEgo makes feasible.

    Notably, this interaction remains entirely internalized for the user, akin to private contemplation, without necessitating overt physical actions. Moreover, this process safeguards user privacy and maintains connectivity with the surrounding environment.

    After its debut at TED 2019, AlterEgo garnered some buzz for a while, but it hasn't received the recognition it deserves just yet. In fact, the MIT Media Lab video presenting AlterEgo had still not reached a million views as of March 5, 2024, more than five years after it was uploaded.

  • What Atom Computing’s Record-breaking 1225-Qubit Quantum Computer Means for Quantum Optimization

    https://www.youtube.com/watch?v=x5BRro-9Ap0

    Quantum optimization is the application of quantum computing techniques and algorithms to tackle optimization problems. Such problems involve finding the best solution or configuration from a set of possibilities, often with constraints, in order to minimize or maximize a specific objective or cost function.
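
    Quantum optimization's appeal becomes concrete with a small example. The sketch below is a hypothetical toy, not anything from the research described here: it brute-forces a QUBO objective of the kind quantum optimizers target, and the loop over all bit patterns is exactly the exponential cost quantum hardware aims to sidestep.

```python
import itertools

def brute_force_qubo(Q):
    """Exhaustively minimize x^T Q x over all binary vectors x.

    Q is a symmetric cost matrix (list of lists). Feasible only for
    small n: the loop visits all 2**n configurations.
    """
    n = len(Q)
    best_x, best_cost = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        cost = sum(Q[i][j] * bits[i] * bits[j]
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_x, best_cost = bits, cost
    return best_x, best_cost

# Toy objective: selecting a variable earns -1, but selecting both
# x0 and x1 incurs a +4 penalty (2 + 2 from the symmetric entries).
Q = [[-1, 2, 0],
     [2, -1, 0],
     [0, 0, -1]]
x, cost = brute_force_qubo(Q)
print(x, cost)
```

    The optimum here selects x2 plus one of x0 or x1, reaching a cost of -2 while avoiding the penalized pair.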

    While quantum supremacy remains a distant goal, quantum optimization has been a key point of focus for researchers.

    A quantum computer first optimized problems efficiently on May 5, 2022. The project, “Quantum Optimization of Maximum Independent Set using Rydberg Atom Arrays,” was co-led by Harvard’s Mikhail Lukin and Markus Greiner and MIT’s Vladan Vuletic. Harvard’s 289-qubit quantum processor, operating in analog mode at effective depths of up to 32, shattered precedents.

    The system was too vast and intricate for classical simulations to pre-optimize its control parameters. Instead, a hybrid quantum-classical algorithm with direct, automated feedback from the quantum processor was used. This fusion of size, depth, and quantum control produced a quantum leap, outperforming classical heuristics.

    These scientists proved neutral-atom quantum processors excel in encoding hard optimization dilemmas. They solved practical problems, like maximum independent set on graphs and quadratic unconstrained binary optimization (QUBO).
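
    The maximum-independent-set problem mentioned above can be written as a QUBO in a few lines. The sketch below is an illustrative classical brute force, not the Rydberg-array method itself: each vertex gets a binary variable, selections earn a reward, and each edge between two selected vertices incurs a penalty large enough that the optimum is always a valid independent set.

```python
import itertools

def max_independent_set(n, edges, penalty=2):
    """Brute-force a QUBO-style encoding of maximum independent set.

    Objective: maximize sum(x) - penalty * (# selected adjacent pairs).
    With penalty > 1, dropping either endpoint of a violated edge always
    improves the score, so the optimum is a genuine independent set.
    """
    best_set, best_score = set(), 0
    for bits in itertools.product([0, 1], repeat=n):
        score = sum(bits) - penalty * sum(bits[u] * bits[v] for u, v in edges)
        if score > best_score:
            best_score = score
            best_set = {i for i, b in enumerate(bits) if b}
    return best_set

# A 5-cycle: no three vertices are mutually non-adjacent, so its
# maximum independent set has size 2.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_independent_set(5, cycle))
```

    The same penalty trick is how hard graph problems are routinely mapped onto quantum annealers and neutral-atom arrays, except that the exhaustive loop is replaced by quantum evolution.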

    Atom Computing has now announced a record-breaking 1,225-qubit quantum computer, not only dwarfing Harvard’s earlier 289-qubit model but also nearly tripling the previous record of 433 qubits set by IBM’s Osprey machine.

    A qubit is the basic building block of quantum information, the counterpart of a classical bit. Unlike a classical bit, however, a qubit can exist in multiple states at once. A greater number of qubits allows larger solution spaces to be explored, enabling the resolution of optimization challenges previously deemed intractable.
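
    The scale of that advantage is easy to quantify: the basis-state space doubles with every added qubit. A quick back-of-the-envelope calculation:

```python
# n qubits span a superposition over 2**n basis states; n classical
# bits, by contrast, hold just one of those states at a time.
for n in (289, 433, 1225):  # Harvard, IBM Osprey, Atom Computing
    digits = len(str(2 ** n))  # decimal digits of 2**n
    print(f"{n} qubits span 2**{n} basis states (~{digits} digits)")
```

    Even the 289-qubit machine already spans a state space far beyond anything a classical memory could enumerate.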

    As quantum processors grow in size, the likelihood of errors in quantum operations also increases. The larger number of qubits also means more quantum gates and interactions, which necessitates efficient gate optimization techniques.

    Large-scale optimization problems often need to be decomposed into smaller subproblems for quantum processing. Optimizing the decomposition process and ensuring that it does not introduce additional complexities or errors is vital.

  • What’s the truth about UFOs? A U.S. defense intelligence report coming soon

    What’s the truth about UFOs? A U.S. defense intelligence report coming soon

    Reports of “flying saucers” in the 1940s and 50s became an American cultural phenomenon. Between 1947 and 1969, as the Cold War approached its climax, more than 12,000 UFO sightings were reported to Project Blue Book. Some ridiculed the sightings; others were convinced that “unidentified flying objects” (UFOs) represented extraterrestrial life. But what, finally, is the truth about the UFOs that have been crossing our skies for decades? Now, a declassified version of the latest U.S. defense-intelligence report on UFOs, rebranded as “unidentified aerial phenomena” in official government parlance, is expected to be made public in the coming days.

    However, UFO enthusiasts hoping for the government to rule on any of the hundreds of US military sightings under investigation as extraterrestrial spacecraft visits are likely to be disappointed.

    The most recent incidents under investigation are attributed to a combination of foreign surveillance, including relatively routine drone flights, and airborne clutter such as weather balloons, according to The New York Times, citing US officials familiar with a classified analysis due to be delivered to Congress on Monday, Oct. 31.

    Many of an older set of unexplained aerial phenomena, or UAPs, are still officially categorized as unexplained, with too little data analysis to draw any conclusions, the Times said.

    U.S. Defense Department spokesperson Sue Gough said in a statement this week that there is no single explanation that addresses the majority of UAP reports. “We are collecting as much data as we can, following the data where it leads, and will share our findings whenever possible,” Gough said.

    She stated that the US government must take care not to reveal “sensitive information” to foreign adversaries about what American intelligence knows about their surveillance operations and how that information is known.

    It remains to be seen what the UAP report says, if anything, about whether any of the phenomena are of alien origin or if they are the result of foreign adversaries flying highly advanced hypersonic spycraft.

    The Office of the Director of National Intelligence (ODNI), the agency in charge of submitting UAP assessments to Congress, declined to comment on the report’s contents.

    The intelligence office works with a newly formed Pentagon bureau known as AARO, which stands for the cryptically named All Domain Anomaly Resolution Office.

    In June 2021, the first such defense-intelligence UAP report to Congress looked at 144 sightings by US military aviators dating back to 2004, the majority of which were documented with multiple instruments.

    One incident was attributed to a large, deflating balloon; the rest were determined to be beyond the government’s ability to explain without further investigation.

    In May of this year, senior defense intelligence officials testified before Congress that the number of UAPs officially cataloged by the Pentagon’s newly formed task force had risen to 400.

    At the time, they stated that there was no evidence that any of the sightings were of alien spacecraft, but the majority of the UAP reports remained unresolved.

    Among them was a video released by the Pentagon of enigmatic airborne objects observed by Navy pilots that displayed speed and maneuverability beyond known aviation technology while lacking any visible means of propulsion or flight control surfaces.

    According to Gough, observed phenomena are frequently classified as “unidentified” simply because sensors were unable to gather enough information to make a positive attribution. She also adds that they are working to mitigate future shortfalls and ensure they have enough data for their analysis.

    The latest Pentagon assessment will be released soon after a first-of-its-kind panel organized by NASA launched a separate, parallel study of unclassified UFO sighting data from civilian government and commercial sectors on Oct. 24.

    Since the first recorded UFO sighting, which dates back to 1639, long before the era of planes and satellites, some intriguing UFO mysteries have gone unsolved, from the 1853 sighting by a group of students and professors on a Tennessee college campus to the oft-reported Stephenville Lights case of 2008, in which more than 200 witnesses, including three police officers who remained anonymous, saw the object.

    Furthermore, NASA has yet to discover any credible evidence of extraterrestrial life. However, NASA has long been exploring the solar system and beyond to help us answer fundamental questions, such as whether we are alone in the universe, most recently with the powerful James Webb Space Telescope, which has already detected the presence of molecules in the atmosphere of an alien planet, Wasp-32b.

    Back in 1961, astronomer Frank Drake devised an equation to estimate the likelihood that alien life exists, taking into account factors such as the average number of planets capable of supporting life and the fraction that could go on to support intelligent life. When the equation was revisited in 2001, the conclusion was that hundreds of thousands of such planets should theoretically exist.
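
    For readers curious about the arithmetic, the Drake equation multiplies seven factors: N = R* · fp · ne · fl · fi · fc · L. The sketch below uses illustrative placeholder values, not Drake's original numbers or any published estimate:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# All parameter values below are placeholders chosen for illustration.
N = drake(R_star=1.0,   # star-formation rate (stars per year)
          f_p=0.5,      # fraction of stars with planets
          n_e=2,        # habitable planets per planetary system
          f_l=0.5,      # fraction of those where life arises
          f_i=0.1,      # fraction where intelligence evolves
          f_c=0.1,      # fraction emitting detectable signals
          L=10_000)     # years a civilization stays detectable
print(N)  # 50.0 with these placeholder inputs
```

    The result swings by many orders of magnitude depending on the chosen inputs, which is exactly why estimates derived from the equation vary so widely.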

    So, even if we don’t know the truth about whether the UFOs we are talking about exist or not, one thing is for sure: “We are not alone; there are millions of billions of trillions of us around.”

  • Samsung improves AI in-memory processing with CXL

    Using the most recent interconnect standards, Samsung has created in-memory processing to improve the performance of AI systems in data centers.

    AMD’s MI100 AI accelerator uses HBM-PIM (high-bandwidth memory, processing-in-memory) chips. Using 200Gbit/s InfiniBand switches, Samsung built an HBM-PIM cluster of 96 MI100 cards and applied it to several new AI and high-performance computing (HPC) applications.

    Tests showed that, on average, the addition of HBM-PIM enhanced performance by more than double and reduced energy usage by more than 50% when compared to traditional GPU accelerators.

    The accuracy of the most recent AI models tends to correlate directly with their size, which poses a significant challenge: if DRAM capacity and data-transfer bandwidth cannot keep up with hyperscale AI models, computing this quantity of data will bottleneck on current memory solutions.

    It is reported that pairing a GPU accelerator with HBM-PIM can save 2,100 GWh of energy annually and cut carbon emissions by 960,000 tons when a large-capacity language model proposed by Google is trained on a cluster of eight accelerators.

    Commercially accessible GPUs and HBM-PIM, in combination with software integration, can lessen the bottleneck brought on by memory and bandwidth limitations in hyperscale AI data centers. 

    As a result, customers will be able to use PIM memory solutions in a combined software environment based on SYCL, which was developed in part by Codeplay, a company Intel recently acquired.

    Samsung has applied PIM technology to AI models based on dense data and PNM technology to AI models based on sparse data.

    Cheolmin Park, head of the New Business Planning Team at Samsung Electronics Memory Business Division, said that HBM-PIM Cluster technology is the industry’s first customized memory solution for large-scale artificial intelligence.

  • Twitter refreshes its design, preparing to roll out new icons “over the coming days”

    Twitter’s design has long been clean and simple: the interface is easy to navigate, the buttons and icons are clearly labeled, the layout is straightforward, and the site is responsive and fast, making it a pleasant experience to use.

    Now, Twitter is revamping the design of its platform by rolling out brand-new icons throughout its iOS and Android apps, as well as on the web. Announced via its official Twitter Design account today, Twitter said that the goal of the new look is to create a cohesive set of icons that are bold in shape and style.

    The new icon design can be viewed in Twitter Design’s announcement tweet, and Twitter said it will be rolling out to users “over the coming days,” according to a report.

    Though it is quite similar to the current icon design, the new look has thicker lines and is a bit more angular. Twitter elaborated that it also wants the new look to be relatable and a little cheeky where possible.

    In April, Tesla CEO Elon Musk agreed to buy Twitter for $54.20 per share, or roughly $44 billion. However, after the company’s valuation dropped along with other tech stocks and the overall market, Musk tried to back out of the agreement. On July 12, Twitter filed a lawsuit to force the merger through, alleging that Musk had violated his agreement to purchase the company.

    According to a court document issued on Thursday, a Delaware judge has stayed Twitter Inc.’s lawsuit against Elon Musk until 5 p.m. ET (9 p.m. GMT) on October 28 to give the billionaire time to complete the purchase.

  • Human and mouse neurons in a dish learn to play Pong: Are only Humans Intelligent?

    Even brain cells in a dish can exhibit inherent intelligence, modifying their behavior over time. According to a new study published in the journal Neuron, researchers placed human and mouse neurons in a dish and taught them to play the video game Pong.

    The neurons were able to learn and adapt their behavior over time, showing that even simple brain cells have the ability to exhibit intelligent behavior.

    The findings suggest that intelligence is not a uniquely human trait but something present in all forms of life, even brain cells in a dish.

    First author Brett Kagan, chief scientific officer at Cortical Labs in Melbourne, Australia, notes that, from worms to flies to humans, neurons are the building blocks of generalized intelligence. So the question, as Kagan puts it, was: can we interact with neurons in a way that harnesses that inherent intelligence?

    The neurons were first connected to a computer so that they could receive feedback on whether their in-game paddle was actually hitting the ball. Electric probes arranged on a grid captured “spikes” to monitor the neurons’ activity and their responses to this feedback.

    The more often the neurons moved the paddle and struck the ball, the stronger the spikes became. When the neurons missed, software created by Cortical Labs analyzed their play style. This showed how neurons can adjust their activity in real time to a changing environment in a goal-oriented way.

    Kagan, who worked with collaborators from 10 other institutions on the project, said, “We chose Pong due to its simplicity and familiarity, but, also, it was one of the first games used in machine learning, so we wanted to recognize that.”

    The researchers also applied an unpredictable stimulus to the cells, and the system as a whole reorganized its activity to play the game better and minimize the chance of a random response. As Kagan notes, simply playing the game, hitting the ball and receiving predictable stimulation, inherently creates a more predictable environment.

    The free-energy principle serves as the foundation for this learning theory.

    Simply put, the brain adapts to its environment by changing either its world view or its actions to better fit the world around it.

    The research team tested other games besides Pong, including the dinosaur game that appears in Google Chrome when the browser loses its connection, in which the player makes the dinosaur jump over obstacles (a project dubbed “Bolan”). “We have done that and have seen some encouraging early outcomes, but there is still work to be done in creating new environments for specific uses,” Kagan explains.

    This research’s future prospects could lead to the development of new drugs, the modeling of diseases, and a deeper understanding of how the brain functions and how intelligence develops.

    According to Kagan, it touches on the central principles of not only what it is to be human but also what it means to be alive, intelligent, and able to process information in a dynamic, ever-changing environment.

    Beyond applications such as improving the accuracy of medical diagnoses or the efficiency of manufacturing processes, one potential implication of this finding is the development of more intelligent artificial intelligence (AI) systems. If brain cells can exhibit intelligent behavior, then AI systems designed to mimic the brain could potentially become far more intelligent.

  • Human-AI Hybrid Doctors: The Present and the Future

    In the past, there have been various debates about human-AI hybrid doctors. Some believe that AI algorithms and human physicians can never be combined and should remain separate entities. Time has shown, however, that the human-AI hybrid doctor is already here, and it is the future of medicine.

    The field of medicine is changing rapidly. New technologies are emerging that are changing the way that doctors diagnose and treat diseases.

    Standing today on the cusp of a new era in medicine, we are using AI in a number of ways to improve healthcare. In the past, we used AI just to diagnose and treat patients. But its potential, as we have already started to observe, is much greater. With the vast amount of data that is now available, we can use AI to predict disease outbreaks, identify new drug targets, and even design personalized treatments.

    For example, the advent of gene editing and CRISPR-Cas9 is revolutionizing the field of medicine. With these new technologies, it is becoming increasingly possible to treat diseases at the genetic level.

    Along with these technological advances, the capabilities of the human-AI hybrid doctor will continue to grow. In time, doctors using AI will be able to diagnose diseases earlier and treat them more accurately and effectively. They will also be able to provide personalized treatment recommendations based on a patient’s individual genetic makeup. A doctor who uses a significant amount of AI in treatment and diagnosis is what we can call a human-AI hybrid doctor. Fair enough, isn’t it?

    Here are 6 ways the combo of humans and AI will revolutionize medicine

    With the help of AI, human doctors can become more efficient and accurate in their diagnoses and treatments.

    Hybrid doctors will achieve a higher level of accuracy

    One of the biggest advantages of hybrid doctors is that they are able to achieve a higher level of accuracy than human doctors or AI doctors alone. This is because they are able to draw on the strengths of both AI and human doctors. For example, AI algorithms can quickly and accurately identify patterns that a human doctor may miss. However, AI alone is not perfect and can sometimes make mistakes. A human doctor can provide the critical thinking and experience needed to catch these mistakes.

    Faster decision-making

    One of the key advantages of a human-AI hybrid doctor is the ability to make decisions much faster than a human doctor alone. The AI can quickly process large amounts of data and identify patterns a human doctor might miss. This not only speeds up diagnosis and treatment but can prove decisive when rapid decisions are needed during care.

    AI can help reduce medical errors

    Medical errors are a major problem in the healthcare industry, and they can have serious consequences. In one 2017 incident, a Tennessee nurse was accused of administering Vecuronium, a paralytic agent, to a 75-year-old patient instead of Versed, a sedative.

    AI can help reduce such medical errors by providing doctors with better decision support.

    They can provide earlier detection of diseases

    Another benefit of AI is that it can help doctors to detect diseases earlier. This is because AI algorithms are able to spot patterns that human doctors would otherwise have missed. This early detection can often lead to better outcomes for patients.

    They can reduce the cost of healthcare

    The use of AI can also help to reduce the overall cost of healthcare. This is because AI-assisted care is often more efficient and can help to avoid the need for expensive tests and procedures.

    They can provide more personalized care

    With the help of AI, human doctors are able to gather more data about their patients and tailor treatment plans that are specifically designed for each individual. This is a major improvement over the one-size-fits-all approach that is often taken with traditional medicine.


    In the future, AI will become increasingly important in medicine as we strive to provide better care for our patients. By harnessing the power of AI, we will be able to make faster and more accurate diagnoses, identify new treatments, and find ways to prevent diseases from occurring in the first place. In addition, AI will help us better understand the complexities of the human body and disease, providing invaluable insights that we can use to improve care.

    So far, AI has shown great promise in a number of medical applications. For example, AI-based diagnostic tools are now being used to detect a variety of diseases, including cancer. AI is also being used to develop new drugs and to personalize treatments for individual patients. AI in the future will continue to play a vital role in helping humans improve healthcare, save lives, and noticeably increase our lifespan.

  • The White House AI Bill of Rights Blueprint: A Good Start!

    People have long expressed concerns about the ethical implications of artificial intelligence (AI) and have argued that AI legislation is needed: more guidelines and protocols to follow when dealing with AI technologies. Responding to that demand, the White House has now made its blueprint for an AI Bill of Rights public.

    What it does is give a comprehensive review of the issues associated with the ethical use of artificial intelligence (AI). What it does not do is offer concrete guidance to the executive and legislative branches. Rather than another introduction similar to those already published elsewhere, one would have hoped the White House would produce a legislative blueprint.

    The blueprint is very brief and clearly written. An overview of the five guiding principles that the authors believe are essential for societal defenses against the misuse of AI is presented in the first few pages. These are the five guiding principles:

    • Safe and effective systems
    • Algorithmic discrimination protections
    • Data privacy
    • Notice and explanation
    • Human alternatives, consideration, and fallback

    Each principle is given a very concise, high-level overview in the introductory sections, and the rest of the document offers more in-depth justifications and examples. “Technical Companion,” though, seems an incorrect name for the longer sections: they are not technical, and readers interested in learning more about the problem should not be put off from reading the expanded material.

    That generality is the issue. In the section on safe and effective systems, for example, a sentence reads, “You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse.” Of course we should. Such broad statements are appropriate for independent organizations like the Organization for Economic Cooperation and Development (OECD) or for trade groups, but stakeholders expect the government to provide stronger guidance and introduce laws.

    A few references are made to executive orders, the Privacy Act of 1974 (yes, 1974), and other laws of a similar vintage. What modifications do those laws need to reflect today’s widespread computing and the data use underlying it? The paper mentions Executive Order 13960, for instance, which “requires that certain federal agencies adhere to nine principles when designing, developing, acquiring, or using AI for purposes other than national security or defense.” Why only “certain federal agencies”?

    As AI is growing abundant in quantity and smarter in quality, the complexity of these issues is really increasing. The author, therefore, offers a request to the executive and legislative branches to create a more specific Artificial Intelligence Bill of Rights.

    There are already numerous white papers and articles outlining concepts for the ethical use of AI. Our governments – federal, state, and local – must take action. The published blueprint provides a good description of the problem, but federal policy should not be applied in that way. This is a topic that ought to be capable of unifying the increasingly divided political spectrum. The White House has to cooperate more extensively with Congress to develop specific laws that would carry out the plan.

    As every invention, development, or social behavior needs to remain under an appropriate law, the White House should not refrain from taking necessary actions in order to ensure the world that AI won’t be able to cause more harm than good.

    It’s a good start!

  • AI in Shopping: Google’s New AI Shopping Tools

    Artificial intelligence is being used in the e-commerce and retail industries to transform them through purchasing recommendations, voice-enabled shopping assistants, personalized shopping experiences, robotic warehouse pickers, facial recognition payment methods, anti-counterfeit tools, and other means.

    Online shopping giants like Google and Amazon are turning to AI to assist customers with smarter, faster, and easier browsing thanks to tools driven by AI and machine learning that give more personalized and attractive results, product information, and suggestions.

    AI-powered mechanics and operations have already made many workplaces simpler and safer to manage, and this reliance on AI will only grow in the coming years. Some striking statistics support the development of AI-powered shopping assistants and e-commerce platforms:

    Opportunities for sales in commerce are aggressively expanding. Ecommerce generated $2.3 trillion in sales in 2017, and by 2022, it is projected to more than double to $5.5 trillion. Online sales now make up 10% of all retail sales in the United States, and they are expected to rise by 15% annually.

    The data on e-commerce shopping patterns is highly illuminating: 43% of online buyers acknowledge making purchases while in bed, 23% at work, and 20% in the bathroom or the car.

    As customers rely more and more on online shopping, which is predicted to account for 95% of all purchases by 2040, e-commerce is providing numerous entrepreneurs with new possibilities.

    Google has further enhanced its shopping capabilities with AI and machine learning-powered tools that provide more personalized and visually appealing results, product information, and suggestions. The improvements mainly aim to give customers more visually appealing and engaging shopping experiences.

    The New Google AI Shopping Tools

    Google’s most recent shopping tools consist of a number of unique features.

    For example, US users can utilize the Google audio search function by saying “shop” followed by the name of a specific product, such as “shop office chairs.” They will then be directed to a shoppable visual stream of products where they may also see the real inventory in stores close by. Currently only available on mobile, the feature will soon be made available on desktop as well.

    When looking for clothing on Google, consumers can use the phrase “shop the look” to order an outfit they like. Google will respond by displaying photos of related fashions along with links to online stores where customers can purchase them.

    Google will add a new category to search called “trending products” that will show off the most popular products that are currently hot.

    After introducing 3D image-based home goods shopping in Google Search earlier this year, the internet company is now introducing 3D shoe shopping, which includes automated 360-degree rotation images. Google intends to expand the tool’s application in the future to cover more different products as machine learning technology develops. Given that consumers interact with 3D graphics over 50% more than with static images, it is very pertinent.

    Likewise, Google is launching a new buying guide that gathers helpful data about a specific product category from several sources all across the web to help customers narrow their options. To assist customers in making the best decisions possible, the buying guide may, for example, provide details about the dimensions and materials of a certain product. As of right now, the feature is only available in the US.

    Next is page insights. This new function allows customers to view what other customers think about a specific product. The tool, which is accessible through the Google app, displays star ratings for a webpage or a product a user is browsing within the same interface.

    Furthermore, users are informed of pricing changes. In the upcoming months, this feature will be made available in the US.

    In addition, Google Search will soon provide customers with more customized shopping results based on their prior purchases and shopping preferences. This will be done using AI and machine learning capabilities. Controls will be provided to users so they can set their preferences or completely turn off the feature. Later this year, the update will be made accessible to US users.

    Another noteworthy aspect is that the search will soon have new dynamic filters. At the top of a results page, users will have options to filter for a variety of criteria, such as styles, price ranges, and whether or not a product is offered at nearby stores. The new filters adjust in real-time based on user behavior. The filters are now available in the US, India, and Japan, and they will soon be available in other markets.

    Moreover, new suggested products will appear in the Google app’s Discover feed, expanding the range of personalized shopping tools. Recommendations here are based on a user’s past purchasing behavior and on the behavior of shoppers who bought similar products. By tapping a picture, users can launch Google Lens to locate a store that carries the item.

    The company’s Shopping Graph, an AI-based model that shows suggested products designed for specific customers, powers all of Google’s new shopping capabilities. Google claims that the Shopping Graph is more intelligent than ever; it is capable of understanding over 35 billion product listings, a 37% increase from the previous year.
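    The growth figure above can be sanity-checked with a bit of arithmetic: if 35 billion listings represent a 37% year-over-year increase, the previous year's count must have been roughly 35 / 1.37 ≈ 25.5 billion. A minimal sketch (the variable names are illustrative, not from Google):

```python
# Back-of-the-envelope check of the Shopping Graph growth figure.
current_listings = 35e9   # listings understood this year, per Google
growth_rate = 0.37        # 37% year-over-year increase

# If current = previous * (1 + growth), then previous = current / (1 + growth).
previous_listings = current_listings / (1 + growth_rate)
print(f"Previous year: ~{previous_listings / 1e9:.1f} billion listings")
# → Previous year: ~25.5 billion listings
```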

    These announcements cap a run of major shopping changes Google has released over the past 12 months. In April, for instance, the company launched multisearch, an immersive tool that lets users combine keyword-based search with image-based search to find an exact product.

    Google vs. Amazon Shopping Tools

    With its ongoing investments in commerce technology, Google hopes to compete more effectively with Amazon, which eMarketer projected would control over 40% of the US e-commerce market in 2022.

    It comes as no surprise that Google is introducing new shopping services, especially because TikTok has been labeled the “new Google” for younger audiences. This holiday season will reveal how open consumers are to new virtual and visual features, regardless of where they shop.

    According to Jennifer Shambroom, chief marketer at Clickatell, a chat-and-SMS-focused commerce company, the commerce landscape has evolved immensely over the past year. “We’ve seen social platforms like Instagram and TikTok work to win over consumers through new, interactive shopping capabilities and effectively meet them where they are in their daily lives,” says Shambroom.

    The corporate world as a whole is changing before our eyes, and the market is quickly adapting to new trends; Google is no exception. Despite being one of the largest corporations in the world, it cannot afford to stand still. To cut costs and improve business process management, all major corporations today are working to incorporate AI and ML technology into their operations.

    Google’s search algorithms determine which websites should rank in its results. According to Internet Live Stats (2022), Google processes over 99,000 searches every single second, which works out to more than 8.5 billion searches per day.
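    The per-day figure follows directly from the per-second rate. A quick check, using the numbers quoted in the article:

```python
# Convert the quoted per-second search rate into a daily total.
searches_per_second = 99_000
seconds_per_day = 60 * 60 * 24  # 86,400 seconds in a day

searches_per_day = searches_per_second * seconds_per_day
print(f"{searches_per_day / 1e9:.2f} billion searches per day")
# → 8.55 billion searches per day
```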

  • DALL-E’s AI Art Generator Finally Opens Doors to a Wider Internet

    DALL-E’s AI Art Generator Finally Opens Doors to a Wider Internet

    Key Points:

    • DALL-E, an AI image generator, is now free and available to everyone.
    • As of now, the AI generates more than 2 million AI-generated graphics daily.
    • Developers claim they have improved their filters to reject images that imitate sexual, violent, or political content, using data and customer input.
    • The Washington Post reports that the software may be used to produce protest photographs.

    Artificial intelligence-created images are already prevalent in online art and image collections. Now that DALL-E, the AI image generator that arguably sparked the current AI-image craze, is free and available to everyone, expect to see even more creative images, along with more images of dubious origin.

    OpenAI, the developer of DALL-E, stated in a blog post on Wednesday that the tool already has 1.5 million users and generates more than 2 million images daily. The company claims it has improved its filters to reject any images intended to imitate sexual, violent, or political content, using data and customer input. DALL-E does not currently have a public API, but one is reportedly in development.

    Although a signup page now exists, the DALL-E section of the OpenAI website still directed users to a waitlist at the time of publication. OpenAI said it used an “iterative deployment approach” to scale DALL-E responsibly, which “has allowed us to find ways they may use it as a powerful creative tool,” according to an emailed statement.

    New users receive 50 free credits toward image creation during their first month after signing up, followed by 15 free credits each subsequent month.
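    That credit scheme is easy to model: 50 credits in month one, plus 15 for every month after. A small sketch (the function name is illustrative, not part of any OpenAI API):

```python
def free_credits(months: int) -> int:
    """Cumulative free DALL-E credits after `months` months (months >= 1)."""
    # 50 credits in the first month, then 15 per subsequent month.
    return 50 + 15 * (months - 1)

print(free_credits(1))   # → 50
print(free_credits(12))  # → 215  (50 + 11 * 15 over a full year)
```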

    When OpenAI’s image generator was first announced in April, people rushed to sign up, with some waiting months for their turn. Though DALL-E (named after the famed artist Salvador Dalí and styled after Disney Pixar’s WALL-E) was the first system to significantly advance AI image technology, other systems have since caught up, at least in terms of popularity. Midjourney has hundreds of thousands of members on its Discord-based platform, and StabilityAI, the company behind the AI art generator Stable Diffusion, has reportedly been in discussions to raise millions of dollars on the strength of its more open-ended, controversial system.

    Reactions Include Criticisms

    Caught between the rising popularity of AI art and the public backlash against it, OpenAI’s announcement arrives at an awkward moment. The Washington Post spoke with several OpenAI product directors while demonstrating how the software could be used to fabricate protest photographs, which would violate the firm’s guidelines against producing political imagery. The system filters user prompts by flagging terms such as “preteen” and “teenager.” The Post also noted that while the system is supposed to block prompts involving famous people, it still let users create images of figures like Mark Zuckerberg and Elon Musk.

    And the vital question of ownership remains open. A tech executive made headlines after entering, and winning, the top prize in a local art competition with an AI-generated work. The U.S. Copyright Office has stated that it does not accept works not created by human hands, so the question is still up for debate. Last week, an artist claimed she had received the first copyright for a work created using AI art.

    Of course, controversy has touched all of the best-known image generators. Stable Diffusion has been blamed for enabling the creation of child sexual abuse imagery, though StabilityAI founder Emad Mostaque said the company was developing tools to prevent it. The heads of Stability AI and OpenAI have even argued over whose system is the less controversial.

    OpenAI announced last week that it was lifting the restrictions that prevented users from uploading real human faces for the AI model to modify. It also claimed to have developed detection technology to stop people from abusing the platform to produce violent or pornographic content, and users are reportedly prohibited from uploading pictures of other people’s faces without their permission. OpenAI had previously granted academics interested in building artificial human faces access to its systems.