The impact of large language models (LLMs), particularly transformer-based models like GPT-4, has been witnessed across various fields, such as chemistry, biology, and code generation. Recently, another noteworthy advancement has emerged: the creation of Coscientist, an artificial intelligence system driven by GPT-4, which autonomously designs, plans, and executes complex experiments across diverse scientific tasks.
According to a study published in the journal Nature on December 20th, Coscientist excels in accelerating research, particularly in the optimization of reactions, presenting autonomous capabilities in experimental design and execution. The system integrates large language models with tools like internet and documentation search, code execution, and experimental automation.
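The architecture described here, an LLM planner that repeatedly chooses among tools until it decides a task is complete, can be sketched in a few lines. Everything below (the canned `call_llm`, the toy `SEARCH` tool) is an illustrative stand-in, not code from the actual Coscientist system:

```python
# Minimal sketch of an LLM-plus-tools agent loop: a planner model picks a
# tool and an argument each step; the loop dispatches the call and feeds
# the result back into the next planning prompt. Illustrative only.

def call_llm(prompt: str) -> str:
    """Stand-in for a GPT-4 call; issues one search, then finishes."""
    return "DONE" if "SEARCH" in prompt else "SEARCH: Suzuki coupling conditions"

def run_agent(task: str, tools: dict, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):
        action = call_llm(f"plan next step for: {task}\n{history}")
        if action == "DONE":
            break
        name, _, arg = action.partition(": ")
        history.append((name, tools[name](arg)))  # dispatch to the chosen tool
    return history

tools = {"SEARCH": lambda q: f"top results for '{q}'"}
print(run_agent("plan a cross-coupling", tools))
```

The real system adds more tools (documentation search, code execution, hardware control), but the control flow is this same plan-act-observe loop.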
In a catalytic cross-coupling experiment aimed at synthesizing biphenyl through Suzuki-Miyaura and Sonogashira reactions, Coscientist displayed remarkable autonomous capabilities. Utilizing internet searches and data analysis, the system autonomously selected appropriate reactants, reagents, and catalysts from available resources.
Results showed rigorous reasoning
Coscientist consistently avoided errors in reactant selection (e.g., never choosing phenylboronic acid for the Sonogashira reaction). Varied preferences in selecting specific bases and coupling partners were observed across different experiments.
Coscientist’s capabilities in chemical synthesis planning tasks. Figure/Description Credit: nature.com
Interestingly, the system provided justifications for its choices, displaying its reasoning regarding reactivity and selectivity.
Experimental execution and validation
Following its autonomous experimental design, Coscientist wrote a Python protocol for the liquid handler, specifying the necessary volumes for the reactions. Upon minor errors in protocol (e.g., incorrect heater-shaker module method name), Coscientist consulted documentation autonomously and rectified the protocol.
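The study does not reproduce the generated protocol itself, but the volume bookkeeping at the heart of such a liquid-handler script is simple stoichiometry. A minimal sketch, with hypothetical reagent names and concentrations that are not taken from the paper:

```python
# Hypothetical sketch: compute transfer volumes for a small-scale coupling
# reaction from stock concentrations. All names and values are illustrative.

def transfer_volume_ul(target_mol: float, stock_molar: float) -> float:
    """Volume (microliters) of stock needed to deliver target_mol moles,
    given the stock concentration in mol/L."""
    litres = target_mol / stock_molar
    return litres * 1e6  # L -> uL

# Example: 10 umol of aryl halide, 1.2 equiv boronic acid, 2 equiv base
reagents = {
    "aryl_halide": transfer_volume_ul(10e-6, 0.5),
    "boronic_acid": transfer_volume_ul(12e-6, 0.5),
    "base": transfer_volume_ul(20e-6, 1.0),
}
for name, vol in reagents.items():
    print(f"{name}: {vol:.1f} uL")
```

A real protocol would wrap calls to the liquid handler's API (e.g. Opentrons' Python protocol format) around exactly this kind of calculation.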
a, A general reaction scheme from the flow synthesis dataset analysed in c and d. b, The mathematical expression used to calculate normalized advantage values. c, Comparison of the three approaches (GPT-4 with prior information, GPT-4 without prior information and GPT-3.5 without prior information) used to perform the optimization process. d, Derivatives of the NMA and normalized advantage values evaluated in c, left and centre panels. e, Reaction from the C–N cross-coupling dataset analysed in f and g. f, Comparison of two approaches using compound names and SMILES string as compound representations. g, Coscientist can reason about electronic properties of the compounds, even when those are represented as SMILES strings. DMSO, dimethyl sulfoxide. Figure/Description Credit: nature.com
Gas chromatography-mass spectrometry analysis of the reaction mixtures confirmed successful synthesis of target products for both Suzuki and Sonogashira reactions. Signals corresponding to the molecular ions of biphenyl and Sonogashira reaction products were observed in the chromatograms.
Revolutionizing research?
The integration of LLMs like GPT-4 with scientific tools signifies a potential revolution in scientific research. These systems offer rapid problem-solving, autonomous experimentation, and advanced reasoning, indicating promising strides toward further scientific discovery and innovation.
The responsible use of these systems is essential to mitigate the potential risks associated with their misuse. Ethical considerations and safety implications must be addressed as the technology continues to advance.
The impact of technology in interior design is in full swing. AI-driven tools are currently reshaping how spaces are envisioned and crafted. Microsoft Teams’ recent AI-driven features at Ignite 2023 have offered a glimpse into the future of workspace customization, balancing futuristic elements with pragmatic functionalities for everyday work environments.
Microsoft Teams can now use AI to clean up your background for you. Image Credit: Microsoft
At Ignite 2023, its annual IT pro conference held November 15–16, Microsoft revealed a batch of Teams updates. Among these, AI-driven voice isolation and a “decorate your background” feature stand out. Voice isolation, which suppresses background noise and stray voices, rolls out in 2024. The “decorate your background” feature arrives in Teams Premium next year.
Immersive spaces in Teams are coming, allowing avatars in 3D environments and activities like gaming or virtual marshmallow roasting. Microsoft Mesh for these spaces becomes available in January. These additions, however, might not be everyone’s cup of tea.
Useful features include customizable emoji reactions, forwarding chats, and new IT management tools. Moreover, enhancements from the re-architected Teams app extend to web experiences, promising better performance and efficiency.
AI’s influence isn’t limited to Microsoft. A surge in AI-powered interior design apps is evident, driven by startups like Reimagine Home and CollovGPT. These platforms offer AI-generated room improvements based on user inputs, attracting millions of visitors and intriguing real estate agents and furniture retailers.
Meanwhile, the excitement around AI interior design apps comes with bugs and limitations. Glitches in beta software and AI’s learning curve plague these platforms. They often struggle with differentiation and accuracy in identifying items or generating designs. However, advancements like ControlNet have enhanced precision, enabling these tools to better adhere to original space parameters.
For interior designers, the technology opens doors: AI-powered design tools, VR/AR experiences, personalized recommendations, predictive analytics, and enhanced communication tools. These advancements are revolutionizing design creation and client engagement.
In this regard, Microsoft Teams has taken a step forward. Its ‘decorate your background’ feature takes a unique spin, analyzing a user’s room and enhancing it virtually – eliminating clutter or adding foliage to spruce up the setting. These enhancements are slated for release in early 2024, while immersive spaces in Teams, riding the metaverse hype, arrive in January.
Moreover, Teams also introduces pragmatic functionalities: customizable emoji reactions, improved chat forwarding, and tools for efficient IT management. Performance enhancements promise double the speed and reduced memory usage for web users on Edge and Chrome.
AI’s increasingly transformative role in interior design is not without hurdles. But the potential for efficiency gains and unique design concepts is significant.
Are you incorporating AI in your design process? Share your experiences in the comments below.
Quantum optimization is the application of quantum computing techniques and algorithms to tackle optimization problems. Such problems involve finding the best solution or configuration from a set of possibilities, often with constraints, in order to minimize or maximize a specific objective or cost function.
While quantum supremacy remains a distant goal, quantum optimization has been a key point of focus for researchers.
The first demonstration of a quantum processor efficiently optimizing problems came on May 5, 2022. The project, “Quantum Optimization of Maximum Independent Set using Rydberg Atom Arrays,” was co-led by Harvard’s Mikhail Lukin and MIT’s Markus Greiner and Vladan Vuletic. Harvard’s 289-qubit quantum processor, operating in analog mode at effective depths of up to 32, set a new precedent.
The system was too large and intricate for classical simulations to pre-optimize its control parameters. Instead, the team used a quantum-classical hybrid algorithm with direct, automated feedback to the quantum processor. This combination of size, depth, and quantum control allowed the processor to outperform classical heuristics.
The researchers showed that neutral-atom quantum processors excel at encoding hard optimization problems, solving practical instances of maximum independent set on graphs and quadratic unconstrained binary optimization (QUBO).
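To see what “encoding” means here, note that MIS maps directly onto a QUBO cost function: reward each selected vertex, and penalize any selected pair that shares an edge. A toy sketch, where a brute-force loop stands in for the quantum processor and only works at this tiny scale:

```python
from itertools import product

# Sketch: encode maximum independent set (MIS) as a QUBO and solve a tiny
# instance by brute force. A quantum processor searches the same cost
# landscape; this classical loop is purely for illustration.

def mis_qubo_cost(x, edges, penalty=2.0):
    # Minimize: -(number of selected vertices) + penalty per violated edge.
    # With penalty > 1, the minimum is always a valid independent set.
    return -sum(x) + penalty * sum(x[i] * x[j] for i, j in edges)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle graph
best = min(product([0, 1], repeat=4), key=lambda x: mis_qubo_cost(x, edges))
print(best)  # a maximum independent set of size 2
```

Brute force takes 2^n evaluations, which is exactly why larger instances motivate quantum (and better classical) heuristics.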
In a recent note, Atom Computing announced a record-breaking 1,225-qubit quantum computer, dwarfing not only Harvard’s earlier 289-qubit model but also nearly tripling the previous record of 433 qubits set by IBM’s Osprey machine.
A qubit is the basic building block of quantum information, analogous to a classical bit. Unlike a plain bit, however, a qubit can exist in a superposition of states. Each additional qubit doubles the size of the state space a processor can explore, enabling the resolution of optimization challenges that were previously deemed intractable.
As quantum processors grow in size, the likelihood of errors in quantum operations also increases. The larger number of qubits also means more quantum gates and interactions, which necessitates efficient gate optimization techniques.
Large-scale optimization problems often need to be decomposed into smaller subproblems for quantum processing. Optimizing the decomposition process and ensuring that it does not introduce additional complexities or errors is vital.
Haptic feedback devices bring a tangible dimension to accessibility, turning it into something you can feel. Haptic feedback is not synonymous with vibration: while vibrations are a common form of haptic feedback, they are not the only type. Haptic feedback encompasses various tactile sensations, including vibrations, forces, pressure, textures, and even temperature changes. In essence, it is a broad term for any tactile sensation delivered to a user through touch or interaction.
Amidst social media buzz and a noticeable surge in haptic tech patents, it’s clear that the appetite for haptic devices is growing rapidly. It’s not only about gamers – professionals in advanced fields such as aviation, for example, are relying on haptic technology. And unsurprisingly, this increase in buzz is translating into even cooler and more sense-sational options, for example:
1. The PS5 DualSense Edge Controller
The PS5 DualSense controller is the market’s best-known and most widely acknowledged haptic feedback device. Sony introduced a new wireless version, the DualSense Edge, in January of this year. The controller’s adjustable trigger stops allow for fine-tuning feedback during different in-game actions, and its tunable stick sensitivity ensures precise control over tactile sensations. You don’t need a PlayStation 5 to enjoy the controller’s haptic feedback; simply plug it into your PC and you can experience it there too. There are very few games that the DualSense controller cannot enhance.
2. Ultraleap’s mid-air haptics
If you are a haptic feedback enthusiast, you must have heard about Ultraleap’s mid-air haptics; any tech enthusiast would be intrigued by this exceptional device. Ultraleap’s mid-air haptics are based on patented algorithms that emit ultrasonic waves and modulate them to stimulate the nerve endings on your skin. The technology can create multiple control points that move around your hand or form shapes like lines and circles, and you can adjust the intensity and frequency of the sensations to create different effects. As Ultraleap puts it, “The combined ultrasound waves have enough force to create a tiny dent in your skin.” They say they use this pressure point to create a vibration that the touch sensors in your hands can detect. The technology can deliver up to 95% accuracy in haptic feedback.
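The physics behind those “control points” is classic phased-array focusing: each transducer is fired with a delay chosen so that all wavefronts arrive at the focal point in phase, concentrating pressure there. A minimal sketch with an illustrative four-element array (the geometry and values are assumptions, not Ultraleap’s design):

```python
import math

# Sketch of phased-array focusing: fire each transducer with a delay
# proportional to its path difference to the focal point so that all
# wavefronts arrive in phase. Geometry below is illustrative only.

SPEED_OF_SOUND = 343.0  # m/s in air

def focus_delays(transducers, focal_point):
    """Per-transducer firing delays (seconds) so all waves arrive together."""
    dists = [math.dist(t, focal_point) for t in transducers]
    farthest = max(dists)
    # The farthest element fires first (delay 0); nearer ones wait.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A 4-element line array focusing 10 cm above its centre
array = [(x * 0.01, 0.0, 0.0) for x in (-1.5, -0.5, 0.5, 1.5)]
delays = focus_delays(array, (0.0, 0.0, 0.10))
print([f"{d * 1e6:.2f} us" for d in delays])
```

Steering the focal point over time, and modulating the output at frequencies the skin is sensitive to, is what turns a static focus into a moving, feelable sensation.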
3. Zebra KYBD-QW-VC-01 Keyboard
Although iPhones now use haptic feedback on their on-screen keyboards, physical PC keyboards rarely feature haptic technology. The Zebra KYBD-QW-VC-01 keyboard is the closest to haptic feedback you can experience on a PC. It features 12 direct function keys and an additional 12 via a shift function, all of which provide precise, tactile feedback with each press. Even in demanding, condensation-prone environments, the built-in heaters and drainage system ensure error-free operation while maintaining a satisfying haptic response.
4. bHaptics TactSuit and TactGlove
bHaptics’ TactSuit wraps 16 to 40 Eccentric Rotating Mass (ERM) vibration motors around the user’s upper body. All bHaptics devices integrate ERM motors to boost the degree of immersion. Whether you’re using the TactSuit with a PC or PCVR, the setup is straightforward: just open the bHaptics Player on your PC, pair the TactSuit, and fine-tune the intensity using the Feedback Intensity bar.
Gamers, especially VR gamers, can’t afford to miss the TactGlove either. The bHaptics TactGlove’s six strategically placed haptic feedback points offer a dynamic and responsive sensory experience, letting you precisely simulate touch, pressure, and texture in virtual reality. You can connect it effortlessly via Bluetooth to your Quest 2.
5. Audi’s haptic accelerator pedal
The 2021 Audi A7 55 TFSI e plug-in hybrid has haptic feedback in its accelerator pedal, a feature that enhances both the driving experience and safety. It provides detectable tactile feedback to the driver, improving awareness of accelerator pressure and resulting in smoother acceleration, especially in hybrid and electric cars. Drivers can also feel feedback through their foot, reducing the need to check the dashboard or infotainment screen while driving. Automakers such as BMW, Honda, Toyota, Mercedes-Benz, and Tesla are also planning to incorporate haptic technology into their vehicles.
6. Teslasuit
Teslasuit can simulate temperature, pressure, impact, and vibration sensations on your skin, as well as track your movements and vital signs. You can use it for training, gaming, fitness, and rehabilitation. Although the best of its kind on the market, the whole-body suit is pretty bulky. It is nevertheless revolutionary for VR gaming and haptic technology: the suit can track your boxing moves and provide haptic feedback based on them, and it can even simulate industrial working environments in VR. But for non-gamers, when it comes to a ‘suit’, we want clothes to wear, not some heavy gear, right? Well, engineers at Rice University are trying to integrate haptic feedback into textile-based clothing. Let’s wait and see!
7. Droplabs EP 01
Although bHaptics’ Tactosy feet devices may be a great option for those who choose the TactSuit ecosystem, the Droplabs EP 01 is the clear standalone winner among haptic shoes. Droplabs EP 01 shoes let you feel sound and music in VR/AR, using patented technology to convert audio signals into vibrations that sync with your feet. You can connect them to any Bluetooth device, such as a smartphone, VR headset, or gaming console.
8. Razer’s haptic gaming chair
This gaming chair features haptic feedback technology from Razer. It has two built-in speakers and four haptic motors that deliver immersive sound and vibration effects to your back and seat, and you can adjust the intensity and frequency of the haptics to suit your preference. The chair also has an ergonomic design, memory foam cushions, and a reclining function.
9. Surface Slim Pen 2
If a digital whiteboard is what your company requires (or already has), try the Surface Slim Pen 2, a haptic pen. The Surface Slim Pen 2 and the Vibe smart whiteboard are two products that work together to create a haptic feedback experience for businesses. The Surface Slim Pen 2 is a Microsoft stylus with a tiny motor inside that vibrates when you touch the screen, mimicking the feeling of pen on paper. The Vibe smart whiteboard is an 85-inch touchscreen display that supports the Surface Slim Pen 2 and lets you collaborate with others in real time. I have had the opportunity to use these products together, and they make a great team. The Surface Slim Pen 2 costs $129.99, and the Vibe smart whiteboard costs $21,999. Not saying that I can afford it! ;-|
10. KAT Walk C2+
This is a VR treadmill that lets you walk, run, crouch, and jump in VR with haptic feedback. It has a low-friction surface that adapts to your movements, a wireless harness that keeps you balanced, and sensors that track your position and speed. Haptic modules attach to your ankles and provide vibration feedback when you step on different terrains in VR. The KAT Walk C2+ is compatible with most VR headsets and games and costs $1,999.
Haptic technology has applications beyond gaming and virtual reality (VR), and some pretty cool devices are yet to come. Beyond the top 10 above, here are some honorable mentions:
We couldn’t help but give an honorable mention to the Phantom Premium 1.5. This remarkable device brings a new level of precision to surgical procedures by providing both force feedback and vibrotactile feedback directly to the surgeon’s hand. It gives surgeons a heightened sense of touch, allowing them to accurately perceive the resistance of tissue and other vital structures during procedures.
Engineers at City University of Hong Kong developed WeTac, a wearable electronic “skin” in December last year. Unlike bulky alternatives, WeTac uses a hydrogel-based system with 32 electrodes, offering a wide range of sensations. This tech could enhance gaming experiences or enable remote robot control, adding a new dimension to virtual and augmented reality.
Apple has finally unveiled its mixed reality headset, which we expected to be comparable with Meta’s Quest 3. Although the Vision Pro’s release was all but inevitable, its price, which turns out to be $3,500, was one of the most anticipated details; we expected it to land anywhere between $2,000 and $3,000. As innovative as the company has always been, Apple has made it absolutely clear that it is not competing on price. So the heavy price tag on the Vision Pro just is what it is; you can’t call it expensive, because there is nothing quite like it yet. The good news? The name already carries a “Pro”, which may reduce the number of pricier ‘Pro versions’ the company could add later.
When it comes to a mixed reality headset, there are numerous expectations, especially if it’s to have an Apple logo on it. We expect at least one revolutionary feature. And the pricey Vision Pro has got one.
“I’m not even kidding, this eye tracking is sick”, said Marques Brownlee in a recent video. He added that it’s like the closest thing he’s experienced to magic.
Eye tracking is not some brand-new technology, even for VR/AR headsets. The PSVR2’s eye tracking, for example, boosts graphical fidelity where you look and softly blurs everything else, which brings the display closer to the way our real eyes focus while improving perceived graphics.
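This idea, often called foveated rendering, can be sketched as a simple mapping from angular distance to shading rate: full detail near the gaze point, progressively coarser further out. The thresholds below are illustrative assumptions, not PSVR2’s actual parameters:

```python
# Sketch of gaze-contingent (foveated) rendering: pick a shading rate
# from the angular distance between a pixel and the user's gaze point.
# The threshold angles are illustrative, not any headset's real values.

def shading_rate(angle_from_gaze_deg: float) -> str:
    """Return a coarse shading rate for a given angular offset from gaze."""
    if angle_from_gaze_deg < 5:     # foveal region: full resolution
        return "1x1"
    elif angle_from_gaze_deg < 15:  # near periphery: half rate
        return "2x2"
    else:                           # far periphery: quarter rate
        return "4x4"

for angle in (2, 10, 30):
    print(f"{angle} deg from gaze -> {shading_rate(angle)} shading")
```

Because the fovea covers only a few degrees of the visual field, most of the frame can be rendered cheaply without the user noticing, which is the whole payoff of accurate eye tracking.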
How Apple’s eye tracking technology stands out
Look at a thing, and click it. And it gets clicked! Remember how Apple revolutionized mobile phones with the touchscreen display a decade and a half ago? This is the next exponential leap.
Apps visible alongside the real environment in Vision Pro; clicking an app.
No wonder why MKBHD calls it magical, right?
Apple achieves this impressive eye-tracking functionality by integrating an array of cameras and multiple infrared LEDs within the mixed reality headset.
Traditionally, VR/AR headsets have relied on controllers or sensors to track user interactions. Apple’s mixed reality headset instead uses eye-tracking cameras that precisely monitor the user’s gaze, along with cameras that detect finger gestures. Users can simply look at an object or interface element and click on it with their bare fingers, without the need for any physical controllers. That’s something, isn’t it?
Retirement is a rapidly approaching reality for many, and technological advancement is making its transformed future increasingly visible. For one, people are living longer and need to save more. And, well, it looks like the ‘golden years’ are now painted with a bit of silver lining, as 58% of Americans plan to work in retirement. Life is also becoming more expensive, with healthcare costs rising.
On the flip side, retirement accounts are also experiencing a steady increase. As of 2023, the average 401(k) balance for individuals aged 65 and above has reached $203,000. Throughout history, there’s been a pretty steady increase in the number of employers providing matching retirement benefits too.
Interestingly enough, Gen Z, the youngest working generation, are leading in retirement savings (compared to their parents and grandparents). It’s clear that the current young people, who are indeed the present and the future, are aware that planning and saving early is crucial. This is a great sign overall for the nation’s economy. But how will they retire – what does the future of retirement actually look like?
Key Technological Breakthroughs that have impacted retirement
Printing press (15th century tech)
Back in the mid-15th century, Johannes Gutenberg revolutionized information dissemination with his invention of the printing press. It made producing books, pamphlets, and financial information (investments, loans, trade transactions, and other financial dealings) more accessible and affordable, allowing people to navigate their financial future.
The Steam Engine (1710s)
First built in practical form by Thomas Newcomen in 1712 and later dramatically improved by James Watt, the steam engine was a crucial development in the Industrial Revolution. This technology made it possible to power machinery and transportation in new ways, creating new industries and changing the nature of work. It may have had an impact on retirement (although ‘retirement’ was not a common term until the 1900s) by creating new job opportunities and changing the skills and knowledge needed for different types of work.
Telephone (1876)
The telephone revolutionized the way people communicated. It let individuals manage their finances remotely, contact financial advisors, and receive information about their investments. It also made it easier to stay connected with loved ones, an important aspect of emotional and mental well-being, especially in the post-work years.
Electric light (1879)
The invention of the electric light bulb by Thomas Edison in 1879 allowed for longer hours of productivity and leisure. Of course, it had a significant impact on financial future planning and post-work life, as it enabled people to stay up later and enjoy their leisure time with better lighting.
Automated Teller Machine (1967)
In 1967, Barclays Bank installed the first-ever Automated Teller Machine (ATM) in London. This revolutionized banking, making it easier for retirees to access their money and manage their finances.
Computerized financial planning tool (1969)
The first computerized financial planning tool, the Life Insurance Company Evaluation (LIFE) system, was developed by American Express in 1969. It calculated individuals’ life insurance needs and retirement income requirements.
Electronic stock exchange (1971)
1971 saw the launch of the National Association of Securities Dealers Automated Quotations (NASDAQ), the first electronic stock exchange. This made it easier and more efficient for retirees to invest in the stock market without the need for a physical trading floor.
Telemedicine service (1993)
In 1993, the first telemedicine service, Consult-a-Nurse, was launched, allowing patients to consult healthcare professionals over the phone. Retirees living in rural or remote areas could access healthcare services more easily.
Online Retirement calculator (1997)
The Vanguard Retirement Nest Egg Calculator provided retirees with a free, easy-to-use tool to estimate how much money they would need in retirement.
Mobile health app (2008)
The first mobile health app, iStethoscope, was developed in 2008. It allowed healthcare professionals to use their smartphones as a stethoscope to listen to patients’ heartbeats and lung sounds. While this development was not specific to retirees, the concept has had a significant impact on the healthcare industry by providing greater accessibility to medical care for patients who have difficulty traveling or limited access to medical facilities.
A majority of retirees and pre-retirees are concerned about healthcare costs. But as technology enables telemedicine and remote monitoring to reduce those costs, the concerns are slowly easing. The use of wearable technology, for example, will only increase with time. And automation will, of course, impact retirement by both eliminating and creating jobs.
In this article, our main focus will be on technology’s impacts on various facets of retirement, with the broad impact of tech devices in the back seat.
1. Impact on what it means to be retired
Here’s what the numbers say about different age groups’ current use of technology:
As you can see above in Table 1, people in the 65+ age group (who are most likely retired) use far less technology in their retired lives. But guess what? The stats are about to change.
Virtual reality is already redefining retirement life. It has enabled people to explore the whole world without leaving their homes. [Funnily enough, the first VR headset came out in the 1960s.] The extent of VR’s impact on retirement will increase with time, but it’s not the only tech influencing retirement:
Retirement homes will be replaced by smart homes with assistive technology.
Robots will provide healthcare and companionship.
Autonomous vehicles will help retirees get around.
And retirees will be able to stay connected with family and friends through more than social media and video conferencing.
Tech overall will enable people to work remotely and supplement their retirement income.
The future of retirement life, as such, seems tempting. Of course, this technological excitement surrounding retirement is not without its drawbacks. If people retire earlier and work less, the economy may suffer; the UK, for example, is experiencing labor shortages due to early retirement. The current reasons for early retirement there obviously vary. But yes, technology (especially VR) could make retirement life so comfortable, and hence so tempting, that it harms the overall economy.
2. The Correlation between Technology and human Lifespan
The future of retirement is intertwined with the correlation between technology and lifespan. In 1900, the average global lifespan was 31 years; today, it is around 73. Technology has enabled medical breakthroughs like antibiotics, vaccinations, and surgical procedures, reducing deaths from infectious diseases as well as infant and maternal mortality rates. The effect of tech here, as you see, is pretty straightforward. In 2019, BofA predicted that, with the rise of medtech, the human lifespan could soon surpass the 100-year mark. This makes one wonder even more about the future of retirement. How healthy that longer lifespan will be is another crucial factor. The least we can expect is advancements in assistive technology helping older adults age in place and maintain their work lives for longer; much better would be people maintaining good natural health to remain in the workforce.
3. Impact on retirement savings and investments
Savings may not be the first thing that comes to mind when thinking about technology’s impact on retirement, but it’s an area where technology is having a transformative effect, enabling people to take greater control over their retirement savings. For example, online tools and calculators help individuals understand how much they need to save to meet their retirement goals. The global fintech market is expected to grow to $556.58 billion by 2030, which will provide new investment and financial planning tools for pre-retirees.
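What such a calculator computes is essentially the future value of a stream of monthly contributions under compound growth. A minimal sketch (the 7% return and the contribution amount are illustrative assumptions, not advice):

```python
# Sketch of the core computation behind an online retirement calculator:
# the future value of fixed monthly contributions compounding monthly.
# Inputs below are illustrative assumptions only.

def future_value(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of an annuity of `monthly` payments at a nominal
    annual rate, compounded monthly."""
    r = annual_rate / 12   # monthly growth rate
    n = years * 12         # total number of contributions
    return monthly * ((1 + r) ** n - 1) / r

# Saving $500/month for 30 years at a nominal 7% annual return
print(f"projected balance: ${future_value(500, 0.07, 30):,.0f}")
```

The same formula, run in reverse, is how these tools answer “how much do I need to save per month to hit a target?”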
Ah, but wait! As they say, there’s always another side to a story:
High-frequency trading has turned the stock market into a wild beast, capable of sudden and violent movements. This poses significant challenges to long-term and retirement investors, ones that extend beyond mere financial losses. As technology evolves, new forms of trading will emerge, and the market could become even more frenetic. The use of technology in retirement savings is also leading to over-reliance on automation and a lack of human due diligence, and it tends to create a false sense of security (for example, leading some pre-retirees to invest heavily in speculative digital assets like crypto as a “hedge against inflation”, rather than saving).
To Conclude
Okay, below is an interactive chart comparing the historical median US retirement savings (since 1983) with the growth of the NASDAQ-100 Technology Sector Index (NDXT) in recent times (since 2007). Comparing the time each took to triple, progress in the technology sector is roughly two to four times faster than the growth in retirement savings:
Comparison Chart
So, in 1989, the median retirement savings account held $21,878, and it took 24 years for a threefold increase. The technology index, by contrast, tripled in just 10 years, from 2013 to 2023, and in fact sextupled if we measure from 2007 (even excluding the sharp peak in 2019).
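The tripling-time comparison follows directly from compound-growth arithmetic: an asset growing at annual rate r triples in ln(3)/ln(1+r) years. Back-solving the figures above for their implied growth rates (an illustration, not quoted data):

```python
import math

# Compound-growth arithmetic behind the tripling-time comparison.
# Rates here are back-solved from the article's figures, not quoted data.

def years_to_triple(annual_rate: float) -> float:
    """Years needed to triple at a constant annual growth rate."""
    return math.log(3) / math.log(1 + annual_rate)

def implied_rate(multiple: float, years: float) -> float:
    """Annual growth rate implied by reaching `multiple`-fold in `years`."""
    return multiple ** (1 / years) - 1

# Retirement savings tripled in ~24 years; the tech index in ~10 years
print(f"savings: ~{implied_rate(3, 24):.1%}/yr, tech index: ~{implied_rate(3, 10):.1%}/yr")
```

Tripling in 24 years implies roughly 4.7% annual growth, while tripling in 10 years implies about 11.6%, which is where the “two to four times faster” comparison comes from.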
Overall, it’s not that technology disrupts retirement. Rather, technology and retirement both undergo gradual changes over time, with technology always being in the driver’s seat. All aspects of retirement are bound to digitize in the future. But the magnitude and immediacy of the impact ultimately depends on how exponential the technological growth indeed is.
1962’s Spacewar! (created at MIT) is widely recognized as the first computer video game. But if we dig into history, physicist William Higinbotham’s “Tennis for Two” was actually created in 1958. The claims differ due to varying definitions of “computer game”; specifically, the criteria for what qualifies as a game played on a computer may diverge.
We can still safely say that the first computer game was developed during the late 1950s and early 1960s. And as the 1960s are commonly used as a reference point to represent the origins of computer gaming, we’re going to adhere to that.
Okay, one thing is for sure – regardless of its specific content, the initial computer video game represented the inception of digital gaming, which includes handheld devices, arcade machines, and even gaming consoles.
This is the game we’re talking about, Spacewar! of the 1960s:
It looks exactly like what you would assume a 1960s game would look like, doesn’t it? Computer gaming is hardly the only technological field in which we’ve come a long way since then.
Here’s the evolution of other technologies since the first computer game:
Technological Field: Evolution from the 1960s till Now

Computing overall: Mainframe computers → personal computers → internet and World Wide Web → cloud computing → big data analytics → artificial intelligence (AI)

Telecommunications: Satellite communication → mobile phones and cellular networks → internet and digital communication → 5G networks → internet of things (IoT)

Biotechnology: Genetic engineering → gene editing → personalized medicine → completion of the Human Genome Project → rise of synthetic biology → AI in drug discovery

Aerospace: Manned missions to the moon → unmanned missions to Mars and beyond → reusable spacecraft (Space Shuttle) → commercial spaceflight → International Space Station → increased private investment in space exploration and development of new launch vehicles

Nanotechnology: Early research on quantum mechanics → development of the scanning tunneling microscope → nanomaterials and nanodevices → nanoscale manipulation and assembly → nanomedicine and nanobiotechnology
Now, using the 60s’ best computer game to represent the decade in technology, here’s exactly how far we’ve come:
Lunar Lander (the 60s’ best computer video game) vs. a typical 2020s video game. [Image Credit: PCGamesN]
Wasn’t this progress just inevitable?
First off, what progress? The computer gaming industry’s revenue ($184.4 billion) now surpasses that of film and music combined. But in this section, the progress we’re talking about is the overall technological state, with gaming as its representative.
Well, let’s face it – the progress was about as inevitable as a dog chasing a tennis ball. Again, the decisive part, indeed, was the creation of the first computer video game itself; the creation of the concept. However, there are many other elements to it. Even if we were bound to reach where we are now, it could have taken a lot longer. Here are the factors that could have made or broken, quickened or slowed down the timeline of technological evolution (in this case, of PC video games):
a. The impact of other technologies
Technological advancements facilitate the growth of other technologies, and the timeline of computer video games has benefited heavily from exactly that.
The mid-1990s saw the widespread adoption of the internet. This global connectivity spawned MMOs and ultimately shaped the future of online gaming.
The rise and growth of mobile devices led to a surge in mobile gaming, expanding games’ reach and accessibility.
Artificial intelligence has long been crucial in gaming, improving gameplay, graphics, and player experience.
b. Social acceptance
Oddly enough, society accepted gaming; things could’ve been a lot different if it hadn’t. As acceptance grew, so did the popularity of video games, and with it the demand for new and innovative games.
The North American video game industry suffered a severe crash in 1983, leading to public skepticism about video games’ long-term viability.
Home computer gaming (and the internet) rose in the 1990s. This gradually forged the path for increased social acceptance of gaming as a legitimate pastime.
2023 – Gaming has become a major cultural force. Events like E3 and the Game Awards attract millions of viewers every year.
c. Social Media
Social media was an unexpected turning point for computer video gaming. Especially so because social media itself had to be invented; it was not ‘discovered’.
The rise of YouTube and Twitch in the mid-2000s helped to popularize esports and live-streaming of gameplay.
Reddit and Discord have become important channels for gaming news and discussion.
Gaming would have been a lot different without social media, and its advancement would’ve been a protracted journey.
d. Economic incentives
Economic incentives undoubtedly play a crucial role in the growth of any technology or sector.
The US economic boom of the 1990s allowed for increased investment in game development and marketing.
So, gaming has come a long way since the first computer game, and it was quite bound to happen. Yet, the above four factors – the impact of other technologies, social acceptance, social media, and economic incentives – not only drove the gaming industry but also created a clear road to drive on.
When it comes to search engines, we have never really seen a race, have we? In the 90s, Yahoo was the clear leader in the search engine market. Google then quickly became people’s go-to choice and currently constitutes 76% of web searches worldwide. In fact, in the US, search engine competition is almost non-existent, with Google accounting for over 90% of searches. Prior to AI’s introduction, Bing had always been the second best, yet accounted for only 9% of global and 2% of US searches. The lack of competition has led to stagnation in the search engine industry. We are seeing very few improvements in search accuracy and speed, and with no real competition, Google is not motivated to introduce new features. For these reasons, it was exciting to see Bing make a move.
Huge Update (October 3, 2023): DALL·E 3 is now available for free in Bing Chat.
The New Bing
So, Microsoft has upgraded its search engine, Bing, using the AI technology behind the chatbot ChatGPT. The company is calling it the “new Bing” and promises that it will deliver information quickly and fluidly. Microsoft CEO Satya Nadella said the new Bing will have a chat interface like ChatGPT, where users can ask it questions. The new Bing is essentially powered by “Prometheus”, an upgraded version of OpenAI’s GPT-3.5 language model. It is more powerful and better able to answer search queries with up-to-date information. Microsoft is also launching two new AI-enhanced features for its Edge browser: chat and compose.
What does this competition mean for consumers?
For many of us, search engines are not only a source of information but the way we find what we need. In fact, more than 70% of online transactions start with search engines. Competition between search engines will benefit consumers in many ways. It also means that search engine companies are likely to invest more in research and development, resulting in even better search experiences and features. This could include improved voice search, more personalized results, and I would say not necessarily “more accurate”, but certainly better results.
Disrupting Traditional Educational System
We’ve heard it a million times: our educational system is terrible, this, and that. So here you have it, Bing’s here to disrupt it. As students can chat with the search engine, they can ask questions, get answers, and gain knowledge in a much easier and faster way. This might not lead to better grades, but students will definitely have access to more knowledge and be able to understand it better. For one, Bing’s chat can answer complex queries, such as “What are the advantages of using solar energy?”. But the “disruption” here is how it explains complex mathematical formulae and scientific equations in whatever way you ask it to. Write “Explain to me the theory of relativity like I’m 5!”, and it will.
Disrupting Google
The arrival of the new Bing could soon disrupt Google in the search engine market. Here’s why:
1. Improved Search Accuracy
The Prometheus model is designed to answer queries quickly and accurately, using advanced AI technology. This means that users can expect more relevant results when they search on Bing, reducing the time they spend scrolling through pages. Bing’s AI search also aims to surpass Google on speed, and speed is one of the main reasons people have been using Google.
2. Chat Interface
One of the key features of the “new Bing” is its chat interface, which is similar to ChatGPT. This allows users to ask questions and receive answers in a conversational manner, making the search experience more interactive and intuitive. The chat interface could prove popular with users who prefer a more conversational approach to searching.
3. Better Up-to-date Information
The internet is vast, holding more than 94 zettabytes of data. The Prometheus model is better able to provide up-to-date information than previous search engines or ChatGPT. This is because it has access to a vast amount of data, including information from both common and uncommon sources. As a result, users can expect information that is relevant and current, increasing the overall quality of their search experience.
4. Unique Content
The best thing about the new Bing is that its chat feature answers your question in a completely unique way. It’s a personalized answer, generated fresh rather than repeated verbatim for every person asking the same query. This means you can get totally unique answers to your questions. Google’s search engine, on the other hand, always provides the same results with links to relevant pages. This is a massive red alert for Google.
5. No Ads?
Most of us don’t like ads. Traditionally, it has worked like this: go to Google, search for something, click a website, and it will show you ads. So, basically, you are paying for “ad clicks” for your free visit to a random website. However, if the chat feature in Bing search does not show ads, then that would be a game-changer! We have already seen that even YouTube Shorts show ads nowadays, so nothing would be too surprising. We will have to wait and see how Bing’s new system works.
Bottom Line
Bing’s search engine AI is much more than “autofill”; it’s a big step forward. Bing’s announcement of chat features was a necessity, not only to challenge the dominance of Google but also to make Google work harder to stay on top. Let’s wait and see Google’s response to this!
The way ChatGPT blew up, even OpenAI’s president Greg Brockman and other executives say they hadn’t expected it. Just one week after launching on November 30th last year, the super-intelligent chatbot crossed 1 million users. This shows just how badly people needed a smart AI-powered assistant to talk with. Now, whenever digital assistants come forward, voice features are the ones that follow. Take Google’s assistant, for example. In fact, even blogs now commonly use TTS (text-to-speech) AI APIs to read out articles. And when it comes to AI at the level of ChatGPT, the expectations for voice features are very high. Like… from Rowan Cheung‘s recent tweet calling ChatGPT a free money printer to HBR’s review calling it AI’s tipping point. Not only does it have to be conversational, but it also has to sound natural and human-like. Of course, only that will do justice to the generative abilities ChatGPT possesses.
How ChatGPT Works
ChatGPT is a member of the GPT family of language models developed by OpenAI. Other GPT models, including the latest davinci-003, focus on general language generation tasks, while ChatGPT was trained on more conversational data. Just like any other GPT model, ChatGPT is transformer-based. It works by predicting the next word in a sentence based on the input text, using deep neural networks and a self-attention mechanism. The model has 175 billion parameters and was trained on over 570 GB of text data from various sources; apart from Common Crawl, these include web pages, books, and Wikipedia articles. The training took over 3 months on then-high-performance GPUs (it took place in 2021). The model’s ability to generate coherent and diverse text, answer questions, summarize text, translate languages, and perform other language tasks makes it a powerful tool for natural language processing applications.
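At its core, that “predict the next word” step boils down to scoring every candidate token and turning those scores into probabilities with a softmax. Here is a minimal sketch of just that final step; the vocabulary and hand-picked scores are invented for illustration and stand in for what a real trained model would produce:

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next_token(scores_by_token):
    # Greedy decoding: pick the token with the highest probability
    tokens = list(scores_by_token)
    probs = softmax([scores_by_token[t] for t in tokens])
    best = max(range(len(tokens)), key=lambda i: probs[i])
    return tokens[best], probs[best]

# Toy scores a model might assign for the context "The cat sat on the"
scores = {"mat": 4.1, "dog": 1.2, "moon": 0.3}
token, prob = predict_next_token(scores)
print(token)  # "mat", the highest-scoring continuation
```

A real model repeats this loop, appending the chosen token to the context and scoring again, and usually samples from the distribution rather than always taking the top choice.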
Why is there no voice version of ChatGPT yet?
It looks so easy on the surface — just combine a text-to-speech (TTS) model with a GPT model, right? Well, it’s not impossible by any means, but it’s not as simple as it looks. From OpenAI’s perspective, adding TTS to ChatGPT would add an extra layer of complexity. From additional resources like GPUs to storage, developers would need to figure out how to make the combined system work efficiently. Integrating a TTS model with GPT would also require a lot of additional budget, training time, and resources. High-quality audio and accurate speech recognition, again, are a must to maintain ChatGPT’s reputation. For that, a partnership with a good TTS provider would be necessary, which can be costly and time-consuming, especially now, as ChatGPT is available for free. (OpenAI itself has stated that ChatGPT is in its feedback stage.)
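To make the coupling concrete, here is a minimal sketch of what a chat-plus-voice pipeline looks like. Both functions are hypothetical placeholders invented for this sketch, not real OpenAI APIs; the point is only that every reply must pass through an extra synthesis stage, which is where the added latency and compute cost come from:

```python
def generate_reply(prompt: str) -> str:
    # Placeholder for a chat-model call (a GPT-style completion)
    return f"Here is an answer to: {prompt}"

def synthesize_speech(text: str) -> bytes:
    # Placeholder for a TTS engine; a real one returns audio samples
    return text.encode("utf-8")  # stand-in "audio" bytes

def voice_chat(prompt: str) -> bytes:
    # The extra layer: the text reply is generated first,
    # then converted to audio before anything reaches the user
    reply = generate_reply(prompt)
    return synthesize_speech(reply)

audio = voice_chat("What is a transformer?")
print(len(audio) > 0)  # True: a non-empty audio payload came back
```

Serving this at ChatGPT’s scale would mean running both stages for every one of millions of daily requests, which is exactly the resource problem described above.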
When will we see ChatGPT voice assistant?
It’s impossible to predict the exact day and time of ChatGPT voice assistant’s launch. However, we can take the available information and speculate.
a. Budget Problem
Sources state that OpenAI executives are discussing a $42 monthly subscription fee for ChatGPT. If that happens, the company will probably be able to invest in TTS. After seeing ChatGPT, Microsoft has already confirmed its $10 billion investment in OpenAI; a huge step forward. Remember how Microsoft invested $240 million in Facebook back in 2007? They know how to invest in the right tech and turn it into a “giant”.
b. Training a Model
Once the budget is in place, the next step would be training a TTS model. OpenAI will need to train a model that can generate convincing and accurate audio from text. It will also need to be powerful enough to handle the conversational abilities of ChatGPT. ChatGPT servers are already famous for crashing due to heavy load. TTS models can add an extra load, so OpenAI will need to be extra careful about this.
c. Version Management
We are yet to see whether the voice feature will cost extra or be included in the existing ChatGPT subscription. In either case, maintaining two different versions of the product — text-only and voice-enabled — will require extra effort.
d. Artificial Voice
OpenAI already has Whisper, an ASR (Automatic Speech Recognition) system. However, they may need to tweak the system to match the naturalness and accuracy of human voices. As mentioned earlier, partnering with a good TTS provider is the likely way for them to go.
We can estimate the ChatGPT voice assistant to arrive sometime in Q3-Q4, 2023.
Voice control browser extension for ChatGPT
The ChatGPT Voice Extension is a hidden gem that many are not aware of. This tool allows you to interact with the ChatGPT preview from OpenAI using just your voice. The option to record your voice and have responses read aloud makes the conversation feel more natural and immersive. It also offers press-and-hold shortcuts, such as holding down the SPACE key outside the text input to record, releasing to submit, and pressing ESC or Q to cancel the transcription. Additionally, you can press E to stop and copy a transcription to the ChatGPT input. The extension is easy to use and supports multiple languages, making it accessible to a wide range of users. Despite its usefulness, this extension is not widely known and is definitely worth discovering for anyone looking to enhance their ChatGPT experience.
Please note that this article is not sponsored by or affiliated with “OpenAI” or “Voice control browser extension for ChatGPT” by any means. It is the user’s responsibility to ensure that they are comfortable with the level of privacy and security provided by the extension, and to make informed decisions about what information they share online.
As with any other AI-based technology, OpenAI’s ChatGPT also requires lots of resources, training, and budget to incorporate a voice feature. Due to its enormous capabilities, people often tend to forget that ChatGPT is still in its early phases. There’s a lot to come if we look at the improvements each new generation of GPT models have brought.
In our previous article, we discussed robot toys from the 70s. As we move forward to the 80s, it’s only natural to expect more sophistication. In the 1980s, as personal computers entered homes, it became commonplace to have personal toy robots too. In fact, we can consider the 80s the golden era of robot toys. For one thing, they were more affordable and available than ever before. And on top of that, video games were not yet the all-consuming competitor for kids’ attention that they are today. The decade of the 80s brought a 20x increase in the number of industrial robots compared to the 1970s. This pushed the industry to miniaturize the technology for domestic use, and also to make it more affordable. Here are the most popular robot toys from the 80s:
The first generation of Transformers figures was released from 1984 to 1990. There were different types of transformers with unique abilities. For example, Optimus Prime could transform from a robot to a truck. And Megatron could transform from a robot to a gun. The way these transformations worked was revolutionary for its time. You can still buy it for around $1,000 from Walmart, Target, or Amazon.
Features of this 19-inch tall robot include voice activation, a battle axe, a blaster, and a charging cable. It has two built-in rechargeable batteries and a travel case. Robosen’s advanced technology brings this classic toy to life with voice-activated actions, mobile app controls, and programmable walking, punching, blasting, and driving. It consists of over 5,000 components, 60 microchips, and 27 servo motors, making for a great experience – yes, still so in 2023.
After Transformers, GoBots were the second most popular robot toy in the 80s. The best thing about them was that they were cheaper than Transformers and just as fun. Minneapolis company Dimension Creative Art Works designed the toys. They made a total of 72 different GoBots, including Leader-1, Cy-Kill, Vamp, Crasher, and Turbo. The GoBot toys were very simple to use – you just pulled a cord to transform them from robots to vehicles. The plastic they were made of wasn’t very durable, but that didn’t matter because they were still a lot of fun to play with. Although they were popular in the 80s, GoBots fell out of favor in the 90s and were eventually discontinued.
In the 1980s, the “Voltron” robot toy line was hugely popular. A plastic, rubber, and metal die-cast figure, Voltron was made by Matchbox and was part of the “Animated Adventures” series. The Voltron toy was an original, licensed reproduction of the popular TV show character. And the toy line, from Lion Force to Defender of the Universe, was special due to its “spacey” storyline, which followed five pilots who traveled to different planets to battle evil forces, facing a different challenge on each planet. Kids loved the idea of these heroes battling evil while controlling a giant robot, and replaying it for real in the form of toys.
Teddy Ruxpin was an animatronic teddy bear released in 1985 by Worlds of Wonder. He looked and acted like a real bear – he had a range of facial expressions, blinking eyes, and a moving mouth. He also had a cassette tape drive that allowed him to read stories and sing songs. Teddy Ruxpin was widely popular in the 80s and 90s, and yes, he even had his own cartoon series. He was also one of the first interactive toys 80s kids got to play with – a “classic” kid’s toy with robo features.
Sindy was a popular 80s British robot doll. Her design closely resembled a teenage girl, with long blonde hair, blue eyes, and a range of outfits. The doll had electronic accessories, such as a toy telephone that you could talk into and hear Sindy answer back. She also had an electronic organ that played music when you pressed the keys. Many people currently in their 40s-50s have fond memories of playing with this toy when they were younger.
Star Wars Droids were tiny robots Kenner released in the late 1980s as part of the Star Wars toy line-up. They included R2-D2 (Astromech Droid), C-3PO (Protocol Droid), R5-D4 (Astromech Droid), 8D8 (Maintenance Droid), IG-88 (Assassin Droid), 4-LOM (Bounty Hunter Droid), and more! Each droid came with different features, such as spinning antennas or moving arms, legs, and heads, depending on which droid you bought. Kids loved these droids because they could reenact scenes from their favorite movie (Star Wars, of course) while playing with them!
My Buddy & Kid Sister were two 1985 dolls Hasbro released as part of their Giggles collection. The denim overalls are so cute on them, as you see in the picture above. Kids used to know these two dolls as inseparable. Each doll came with several accessories including books, clothes, shoes, hats, sunglasses, etc. 80s kids loved them because they could dress up their dolls however they wanted! Plus there were several other fun accessories available too such as My Buddy’s fishing pole or Kid Sister’s Barbie car!
Robotix was hugely popular during the 80s due to its versatility – allowing kids not only to have fun building robots but also to take pride in creating something unique that they themselves could control via remote control. Milton Bradley created this toy line based on the animated series of the same name. The series was about a conflict on Skalorr, between Protectons and Terrakors, with humans caught in the middle. The toys were similar to erector sets, with motors, wheels, and pincers that could be used to create robots. Each set had an end goal, and its own name, and could be mixed and matched. Tyraanix Series R-1100, R-1000, and R-2000 were some of the sets released. Robotix was like Legos on steroids and provided hours of fun for kids and adults alike.
Radio Shack RC Robot
Okay, this one was like nothing else. Unlike anything on planet Earth, the Radio Shack RC Robot came in the form of something like a balloon. You had to blow it up and then control the toy with a remote control. Just look at the creativity – it literally was a robot. And due to the form it came in, portability was a big plus. I have to say, it’s my favorite one on the list. This robot toy was a success in the early-to-mid 80s. However, due to competition from toys like Robotix and Big Trak, it eventually faded into obscurity. But still, looking at it now, the RC Robot was one of the most innovative toys of its time.
Heathkit HERO 1
The HERO 1 robot was a hi-tech, advanced robot in the 80s. High-level programming languages like ANDROTEXT made it easy to control and program. The robot was featured in a few episodes of the children’s television program Mr. Wizard’s World. In fact, Byte magazine even called it a “product of extraordinary flexibility and function.” Setting it apart from other robots of its time, HERO 1 had a self-contained computer with a Motorola 6808 CPU and 4 kB of RAM. Yes, 4 kB – and that was a lot back in the 80s! The robot also had light, sound, and motion detectors as well as a sonar ranging sensor, and an optional arm mechanism and speech synthesizer were available. If you were a teen in the 80s, this robot was definitely more than a toy, and something many dreamt of having.
The KITT car from Knight Rider was one of the most iconic toy cars from the 1980s TV show starring David Hasselhoff as Michael Knight, who drove around in a high-tech black Pontiac Firebird Trans Am called KITT (short for “Knight Industries Two Thousand”). The car featured advanced technology like an artificial intelligence system, talking computer voice, self-driving mode, rocket booster jets, and more! Of course, it also had its own line of toys – including miniature versions of KITT. It became very popular among kids during the 80s (and still remains an iconic robot car for the 80s kids). The Knight Rider car was one of the first cars most 80s kids got to play with, apart from the 70s Big Trak.
Tomy Corporation’s Omnibot 2000 Robot Toy was released in 1984 and was a huge hit throughout the decade. 1980s kids also called it a toy robot from the future, as they could see “2000” written on the front of it. And it indeed did futuristic tasks for that time! Yes, it featured an AI voice recognition system. That system was what allowed users to speak commands into the toy robot’s microphone for it to perform actions. Actions like turning on lights or carrying out simple tasks like cleaning up after dinner parties. It also had sensors that enabled it to react naturally when touched or spoken to. In fact, it could even deliver pre-recorded messages if desired! It’s not hard to see why this robot quickly became one of the most beloved toys from the 1980s. Today there are many variations of this 80s toy robot available online.
Another classic from Tomy Corporation was their Roboforce Robots line, also released in 1984. This robot toy came in two different sizes (large & small). Users used to battle against each other using various weapons such as lasers & missiles! These robots were unique because they could detect when they had been hit by their opponents’ weapons. So, no one ever won until all four robots had been destroyed. They also featured realistic sound effects making them even more enjoyable & exciting when played with friends! Toyfinity, an independent toy company, revived the robot in 2013.
Battle Beasts Action Figures were a popular robot toy line by the Japanese company Takara Co., released in 1985. Each figure depicted an anthropomorphic animal armed with various weapons, such as swords and guns, allowing users to customize their teams however they liked. Figures would light up when placed onto heat-sensitive panels, indicating whether a figure had won or lost against an opponent’s team. For 80s kids, this robot toy was a great way to pass time with buddies.
The Robie Jr robot toy was a hit during the late 80s thanks to its ease of use and durability. Its interactive features enabled users to record personalized messages using its built-in microphone. It came complete with sound effects and some facial expressions, allowing it to become the user’s very own robotic friend. His eyes light up when moving, and the bump guard on the front base lets him turn and say phrases such as “Oops” and “Excuse me” if he encounters an obstacle. The controller offers manual control and a follow function that makes this 11-inch robot follow it. Robie Jr’s durable plastic design ensured he’d last generations. The perfect gift for those wishing to bring back memories and share them with their kids, grandkids, and theirs.
Bottom Line
This time too, like the previous article about robot toys, my grannie helped me a lot. Although she was in her 20s when these robot toys were at the peak of their popularity, she can still recall them vividly. The robot toys from the 80s were an important part of many children’s childhoods. In the 70s, most robot toys were just robotic figures that performed pre-programmed actions. But in the 80s, as you can see, robot toys gained the ability to interact with their environment. That’s certainly a big jump. Thanks for reading, and stay tuned for the next article, which will be about robot toys from the 90s.