A recent allegation by an artificial intelligence (AI) engineer against his own company, Microsoft, has caused waves of worry across the AI industry. Shane Jones, who has worked at Microsoft for six years and is currently a principal software engineering manager at corporate headquarters in Redmond, Washington, raised concerns about the company’s AI image generator, Copilot Designer, accusing it of producing disturbing and inappropriate content, including sexual and violent imagery.
Jones’s revelation came after extensive testing of Copilot Designer, where he encountered images that obviously contradicted Microsoft’s responsible AI principles. Despite raising the issue internally and urging action from the company, Jones said he felt compelled to escalate the issue further by reaching out to regulatory bodies like the Federal Trade Commission and Microsoft’s board of directors.
On Wednesday, Jones sent a letter to Federal Trade Commission Chair Lina Khan, and another to Microsoft’s board of directors.
“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote to Chair Khan. He added that, since Microsoft has “refused that recommendation,” he is calling on the company to add disclosures to the product and change its rating on Google’s Android app store.
The basis of Jones’s allegations is the lack of mechanisms within Copilot Designer to prevent the generation of harmful content. Powered by OpenAI’s DALL-E 3 model, the tool creates images from text prompts, but Jones found that it often drifted into producing violent and sexualized scenes, alongside copyright violations involving popular characters such as Disney’s Elsa and Star Wars figures.
In response, Microsoft asserted that it prioritizes safety, emphasizing its internal reporting channels and the specialized teams dedicated to assessing the safety of its AI tools.
“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” CNBC quoted a Microsoft spokesperson as saying.
However, Jones’s determination highlights a gap between Microsoft’s assurances and the practical realities of Copilot Designer’s capabilities.
One of the most concerning risks with Copilot Designer, according to Jones, is that the product can inject harmful content into images generated from benign prompts. For example, as Jones stated in the letter to Khan, “Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear.”
The rapid advancement of the technology has outpaced regulatory frameworks, creating potential for misuse and ethical dilemmas. This incident has further amplified existing fears about the largely unrestricted capabilities of generative AI.
“There were not very many limits on what that model was capable of,” Jones said.
But this is not the first time a generative AI has behaved badly. Recently, Google limited its image generator Gemini over its mishandling of race and gender when depicting historical figures: the model erroneously inserted minorities into ahistorical contexts when generating images of prominent figures such as the Founding Fathers, the pope, or Nazis.
Competition among the big AI companies keeps intensifying. Anthropic’s latest release, Claude 3, backed by Amazon and Google, stands out because it can understand multiple types of information, not just text. This multimodal capability challenges rivals such as OpenAI and Google by letting the model do more with data.
The Claude 3 family comprises three models, with Opus as the flagship. Anthropic claims Opus beats OpenAI’s GPT-4 and Google’s Gemini Ultra at tasks such as undergraduate-level knowledge, complex reasoning, and basic mathematics. This is the first time Anthropic, an American startup, has shipped a model that understands multiple types of input, letting users submit pictures and documents to get answers.
Claude 3 marks Anthropic’s rapid rise from startup to AI powerhouse. Backed by industry leaders and $7.3 billion in funding raised over the past year, Anthropic is now a leading force in generative AI. The chatbot excels at condensing extensive data, coherently summarizing documents of up to 150,000 words, far surpassing its predecessors. It also competes strongly with the well-known ChatGPT, offering superior capacity for handling larger volumes of text. Anthropic highlights Claude 3’s improved risk comprehension, which addresses earlier complaints that its models were over-conservative in refusing requests.
Anthropic is clearly prioritizing multimodality, providing a platform for integrating diverse data types into advanced AI interactions. Recent concerns over Google’s AI image generator underscore the challenges that accompany multimodal capabilities.
When OpenAI released GPT-4 last spring, it was widely considered the most powerful chatbot technology. Google recently introduced a comparable technology called Gemini.
Now, Anthropic openly claims that its Claude 3 Opus technology surpasses both GPT-4 and Gemini in mathematical problem-solving, computer coding, general knowledge, and other areas. The technology became available to consumers on Monday at a subscription fee of $20 per month, while a less capable version, Claude 3 Sonnet, is offered for free. The company also lets businesses build their own chatbots and other services on top of the Opus and Sonnet technologies.
Image Credit: anthropic.com
Founded by former members of OpenAI, Anthropic focuses exclusively on data analysis rather than image generation, reflecting its emphasis on safety and precision. Yet imperfections in AI models persist.
Arnav Kapur’s innovation, AlterEgo, enables individuals to interface with technology using their thoughts. Rather than merely engaging in internal dialogue, users can effectively communicate with technological interfaces.
Imagine being able to operate machinery or conduct internet searches through thought alone.
AlterEgo functions by capturing the subtle neuromuscular signals produced when a user internally verbalizes words. These signals are relayed to computing systems connected to the internet, which interpret the user’s mental commands and retrieve the requested information. Essentially, the device offers a means of conducting online searches through mental processes alone, akin to using a search engine with one’s mind.
The apparatus comprises a headset equipped with highly sensitive sensors capable of detecting these minute signals. Despite their subtle nature, the signals mirror the user’s intentions, enabling seamless control of machinery or internet browsing. This feat, analogous to verbal communication, underscores the complexity of interfacing with technology via cognitive pathways, a task AlterEgo makes feasible.
Notably, this interaction remains entirely internalized for the user, akin to private contemplation, without necessitating overt physical actions. Moreover, this process safeguards user privacy and maintains connectivity with the surrounding environment.
After its debut at TED 2019, AlterEgo generated buzz for a while, but it hasn’t yet received the recognition it deserves. As of March 5, 2024, more than five years after upload, the MIT Media Lab video introducing AlterEgo has yet to hit a million views.
The impact of large language models (LLMs), particularly transformer-based models like GPT-4, has been witnessed across various fields, such as chemistry, biology, and code generation. Recently, another noteworthy advancement has emerged: the creation of Coscientist, an artificial intelligence system driven by GPT-4, which autonomously designs, plans, and executes complex experiments across diverse scientific tasks.
According to a study published in the journal Nature on December 20th, Coscientist excels at accelerating research, particularly reaction optimization, demonstrating autonomous capabilities in experimental design and execution. The system integrates large language models with tools for internet and documentation search, code execution, and experimental automation.
In a catalytic cross-coupling experiment aimed at synthesizing biphenyl through Suzuki-Miyaura and Sonogashira reactions, Coscientist displayed remarkable autonomous capabilities. Utilizing internet searches and data analysis, the system autonomously selected appropriate reactants, reagents, and catalysts from available resources.
Results showed strict reasoning
Coscientist consistently avoided errors in reactant selection (e.g., never choosing phenylboronic acid for the Sonogashira reaction). Varied preferences in selecting specific bases and coupling partners were observed across different experiments.
Figure: Coscientist’s capabilities in chemical synthesis planning tasks. Figure/Description credit: nature.com
Interestingly, the system provided justifications for its choices, displaying its reasoning regarding reactivity and selectivity.
Experimental execution and validation
Following its autonomous experimental design, Coscientist wrote a Python protocol for the liquid handler, specifying the volumes required for each reaction. When minor errors appeared in the protocol (e.g., an incorrect heater-shaker module method name), Coscientist autonomously consulted the documentation and corrected the protocol.
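The study’s actual generated code is not reproduced in this article. Purely as an illustration of what a liquid-handler protocol of this kind might look like, here is a minimal sketch; `LiquidHandler`, its `transfer` method, and the reagent and well names are all invented stand-ins, not the real automation API used in the study.

```python
class LiquidHandler:
    """Minimal stub that records each pipetting step instead of moving liquid."""

    def __init__(self):
        self.log = []

    def transfer(self, volume_ul, source, dest):
        # A real handler would aspirate/dispense; we just log the step.
        self.log.append((volume_ul, source, dest))


handler = LiquidHandler()

# Dispense reagents for a hypothetical Suzuki coupling well.
handler.transfer(50, source="phenylboronic_acid", dest="well_A1")
handler.transfer(50, source="aryl_halide", dest="well_A1")
handler.transfer(10, source="pd_catalyst", dest="well_A1")
handler.transfer(20, source="base_K3PO4", dest="well_A1")

print(f"{len(handler.log)} transfer steps queued")
```

The point is only that such a protocol is ordinary, inspectable Python: an incorrect method name (the kind of error the article mentions) would surface as an `AttributeError` that an LLM agent could read and repair against the documentation.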
Figure: Optimization of a flow-synthesis reaction and a C–N cross-coupling reaction, comparing three approaches (GPT-4 with prior information, GPT-4 without prior information, and GPT-3.5 without prior information) and two compound representations (compound names versus SMILES strings). Coscientist can reason about the electronic properties of compounds even when they are represented as SMILES strings. DMSO, dimethyl sulfoxide. Figure/Description credit: nature.com
Gas chromatography-mass spectrometry analysis of the reaction mixtures confirmed successful synthesis of target products for both Suzuki and Sonogashira reactions. Signals corresponding to the molecular ions of biphenyl and Sonogashira reaction products were observed in the chromatograms.
Revolutionizing research?
The integration of LLMs like GPT-4 with scientific tools signifies a potential revolution in scientific research. These systems offer rapid problem-solving, autonomous experimentation, and advanced reasoning, indicating promising strides toward further scientific discovery and innovation.
Responsible use of these systems is essential to mitigate the risks of misuse. Ethical considerations and safety implications must be addressed as the technology continues to advance.
The impact of technology in interior design is in full swing. AI-driven tools are currently reshaping how spaces are envisioned and crafted. Microsoft Teams’ recent AI-driven features at Ignite 2023 have offered a glimpse into the future of workspace customization, balancing futuristic elements with pragmatic functionalities for everyday work environments.
Microsoft Teams can now use AI to clean up your background for you. Image Credit: Microsoft
At Ignite 2023, Microsoft’s annual IT pro conference held November 15–16, the company revealed a batch of Teams updates. Among these, AI-driven voice isolation and a “decorate your background” feature stand out. Voice isolation, which suppresses background noise and voices, rolls out in 2024. The “decorate your background” feature arrives in Teams Premium next year.
Immersive spaces in Teams are coming, allowing avatars in 3D environments and activities like gaming or virtual marshmallow roasting. Microsoft Mesh for these spaces becomes available in January. These additions, however, might not be everyone’s cup of tea.
Useful features include customizable emoji reactions, forwarding chats, and new IT management tools. Moreover, enhancements from the re-architected Teams app extend to web experiences, promising better performance and efficiency.
AI’s influence isn’t limited to Microsoft. A surge in AI-powered interior design apps is evident, driven by startups like Reimagine Home and CollovGPT. These platforms offer AI-generated room improvements based on user inputs, attracting millions of visitors and intriguing real estate agents and furniture retailers.
Meanwhile, the excitement around AI interior design apps comes with bugs and limitations. Glitches in beta software and AI’s learning curve plague these platforms. They often struggle with differentiation and accuracy in identifying items or generating designs. However, advancements like ControlNet have enhanced precision, enabling these tools to better adhere to original space parameters.
For interior designers, AI opens doors with AI-powered design tools, VR/AR experiences, personalized recommendations, predictive analytics, and enhanced communication tools. These advancements are revolutionizing design creation and client engagement.
In this regard, Microsoft Teams has taken a step forward. Its ‘decorate your background’ feature takes a unique spin, analyzing a user’s room and enhancing it virtually – eliminating clutter or adding foliage to spruce up the setting. These enhancements are slated for release in early 2024, with immersive spaces in Teams, riding the metaverse hype, available in January.
Teams is also introducing pragmatic functionalities such as customizable emoji reactions, improved chat forwarding, and tools for efficient IT management, while performance enhancements promise double the speed and reduced memory usage for web users on Edge and Chrome.
In the context of AI’s increasing transformative role in interior design, it’s not without its hurdles. But the potential for efficiency gains and unique design concepts is significant.
Are you incorporating AI in your design process? Share your experiences in the comments below.
Quantum optimization is the application of quantum computing techniques and algorithms to tackle optimization problems. Such problems involve finding the best solution or configuration from a set of possibilities, often with constraints, in order to minimize or maximize a specific objective or cost function.
While practical quantum advantage remains a distant goal, quantum optimization has been a key focus for researchers.
On May 5, 2022, a quantum computer efficiently optimized real problems for the first time. The project, “Quantum Optimization of Maximum Independent Set using Rydberg Atom Arrays,” was co-led by Harvard’s Mikhail Lukin, MIT’s Markus Greiner, and Vladan Vuletic. Harvard’s 289-qubit quantum processor, operating in analog mode at effective depths of up to 32, broke new ground.
The system was too vast and intricate for classical simulations to pre-optimize its control parameters. Instead, the team used a quantum-classical hybrid algorithm with direct, automated feedback to the quantum processor. This fusion of size, depth, and quantum control produced a genuine leap, outperforming classical heuristics.
These scientists showed that neutral-atom quantum processors excel at encoding hard optimization problems. They solved practical ones, such as maximum independent set on graphs and quadratic unconstrained binary optimization (QUBO).
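At small sizes, a QUBO instance can be brute-forced classically, which makes the encoding easy to see. The sketch below (the 4-vertex toy graph and penalty weight are invented for illustration, not taken from the Harvard experiment) encodes maximum independent set as a QUBO: each selected vertex earns a reward of 1, and each selected pair of adjacent vertices pays a penalty large enough to forbid it.

```python
from itertools import product

# Toy graph: 4 vertices on a path 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
penalty = 2  # must exceed the per-vertex reward of 1


def qubo_cost(x):
    """QUBO objective: reward selected vertices, penalize selected edges."""
    cost = -sum(x)  # -1 per selected vertex
    for i, j in edges:
        cost += penalty * x[i] * x[j]  # penalty if both endpoints selected
    return cost


# Exhaustively check all 2**n binary assignments (only feasible for tiny n;
# this exponential blow-up is exactly what quantum hardware targets).
best = min(product([0, 1], repeat=n), key=qubo_cost)
print(best, qubo_cost(best))  # an independent set of size 2, cost -2
```

On a path of four vertices the maximum independent set has size 2, so the minimal cost is -2; the quantum processor's job is to find such minima when `n` is far too large for this loop.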
Atom Computing recently announced a record-breaking 1,225-qubit quantum computer, not only dwarfing Harvard’s earlier 289-qubit machine but also nearly tripling the previous record of 433 qubits set by IBM’s Osprey.
A qubit is the basic building block of quantum information, analogous to a classical bit. Unlike an ordinary bit, however, a qubit can exist in multiple states at once. A greater number of qubits allows the exploration of larger solution spaces, enabling the resolution of optimization challenges previously deemed intractable.
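The scaling is easy to make concrete: each added qubit doubles the number of basis states a register can hold in superposition, so the state space grows as 2^n. A quick sketch using the qubit counts mentioned above:

```python
def basis_states(n_qubits: int) -> int:
    """Number of classical basis states an n-qubit register spans."""
    return 2 ** n_qubits


# Qubit counts from the article: Harvard (289), IBM Osprey (433),
# Atom Computing (1,225).
for n in (1, 2, 10, 289, 433, 1225):
    digits = len(str(basis_states(n)))
    print(f"{n} qubits -> state space with about {digits} decimal digits")
```

The 1,225-qubit machine's state space has hundreds of decimal digits, which is why exhaustive classical search, and even classical simulation of the processor itself, becomes hopeless long before these sizes.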
As quantum processors grow in size, the likelihood of errors in quantum operations also increases. The larger number of qubits also means more quantum gates and interactions, which necessitates efficient gate optimization techniques.
Large-scale optimization problems often need to be decomposed into smaller subproblems for quantum processing. Optimizing the decomposition process and ensuring that it does not introduce additional complexities or errors is vital.
Haptic feedback devices bring a tangible dimension to accessibility, turning it into something you can feel. Haptic feedback is not synonymous with vibration: while vibrations are a common form of haptic feedback, they are not the only type. Haptic feedback can encompass various tactile sensations, including vibrations, forces, pressure, textures, and even temperature changes. In essence, it is a broad term for any tactile sensation delivered to a user through touch or interaction.
Amidst social media buzz and a noticeable surge in haptic tech patents, it’s clear that the appetite for haptic devices is growing rapidly. It’s not only about gamers – professionals in advanced fields such as aviation, for example, are relying on haptic technology. And unsurprisingly, this increase in buzz is translating into even cooler and more sense-sational options, for example:
1. The PS5 DualSense Edge Controller
The PS5 DualSense controller is the market’s foremost and most widely recognized haptic feedback device. Sony introduced the new wireless version, DualSense Edge, in January of this year. This controller’s adjustable trigger stops allow for fine-tuning feedback during different in-game actions. Similarly, the tunable stick sensitivity ensures precise control over tactile sensations. You don’t need a PlayStation 5 to enjoy the controller’s haptic feedback; simply plug it into your PC and you can experience it. There are very few games the DualSense controller cannot enhance.
2. Ultraleap’s mid-air haptics
If you are a haptic feedback enthusiast, you must have heard about Ultraleap’s mid-air haptics. And any normal individual or tech enthusiast would be intrigued by this exceptional haptic feedback device. Ultraleap’s mid-air haptics is based on patented algorithms that release ultrasonic waves and modulate them to stimulate the nerve endings on your skin. The technology can create multiple control points that can move around your hand or form complex shapes like lines and circles. You can also adjust the intensity and frequency of the sensations to create different effects. As Ultraleap puts it, “The combined ultrasound waves have enough force to create a tiny dent in your skin.” They say they use this pressure point to create a vibration that touch sensors in your hands can detect. This technology can deliver up to 95% accuracy in haptic feedback.
3. The Zebra KYBD-QW-VC-01 Keyboard

Although iPhones now offer haptic feedback on their on-screen keyboards, physical PC keyboards rarely incorporate haptic technology. The Zebra KYBD-QW-VC-01 keyboard is the closest to haptic feedback you can experience on a PC. It features 12 direct function keys and an additional 12 via a shift function, all providing precise, tactile feedback with each press. Even in demanding, condensation-prone environments, the built-in heaters and drainage system ensure error-free operation while maintaining a satisfying haptic response.
4. bHaptics’ TactSuit and TactGlove

bHaptics’ TactSuit wraps 16 to 40 Eccentric Rotating Mass (ERM) vibration motors around the user’s upper body. All bHaptics devices integrate ERM motors to boost the degree of immersion. Whether you’re using the TactSuit with a PC or PCVR, setup is straightforward: just open the bHaptics Player on your PC, pair the TactSuit, and fine-tune the intensity using the Feedback Intensity bar.
VR gamers in particular can’t afford to miss the TactGloves either. The bHaptics TactGlove’s six strategically placed haptic feedback points offer a dynamic and responsive sensory experience, letting you precisely simulate touch, pressure, and texture in virtual reality. It connects effortlessly to your Quest 2 via Bluetooth.
5. The Audi A7’s haptic accelerator pedal

The 2021 Audi A7 55 TFSI e plug-in hybrid has haptic feedback in its accelerator pedal, enhancing both the driving experience and safety. It provides detectable tactile feedback to the driver, improving awareness of accelerator pressure and resulting in smoother acceleration, especially in hybrid or electric cars. Drivers can also feel feedback through their foot, reducing the need to check the dashboard or infotainment screen while driving. Automakers such as BMW, Honda, Toyota, Mercedes-Benz, and Tesla are also planning to incorporate haptic technology into their vehicles.
6. Teslasuit
Teslasuit can simulate temperature, pressure, impact, and vibration sensations on your skin, as well as track your movements and vital signs. You can use it for training, gaming, fitness, and rehabilitation purposes. Although it is the best of its kind on the market, the whole-body suit is pretty bulky. As you can see in the video, it is revolutionary for VR gaming and haptic technology. It’s amazing how the suit tracks your boxing moves and provides haptic feedback based on them. The suit can even simulate industrial working environments in VR. But for non-gamers, when it comes to a ‘suit’, we want clothes to wear, not heavy gear, right? Well, engineers at Rice University are trying to integrate haptic feedback into textile-based clothing. Let’s wait and see!
7. Droplabs EP 01

Although bHaptics’ Tactosy footwear may be a great option for those who chose the TactSuit, Droplabs EP 01 is the clear standalone winner among haptic shoes. Droplabs EP 01 shoes let you feel sound and music in VR/AR. They use patented technology to convert audio signals into vibrations synced to your feet, and you can connect them to any Bluetooth device, such as a smartphone, VR headset, or gaming console.
8. Razer haptic gaming chair

This gaming chair features haptic feedback technology from Razer. It has two built-in speakers and four haptic motors that deliver immersive sound and vibration effects to your back and seat. You can adjust the intensity and frequency of the haptics to suit your preference. The chair also has an ergonomic design, memory foam cushions, and a reclining function.
9. Surface Slim Pen 2 (with the Vibe smart whiteboard)

If a digital whiteboard is what your company requires or already has, try the Surface Slim Pen 2, a haptic pen. Microsoft’s Surface Slim Pen 2 and the Vibe smart whiteboard are two products that can work together to create a haptic feedback experience for businesses. The Surface Slim Pen 2 is a stylus with a tiny motor inside that vibrates when you touch the screen, mimicking the feeling of pen on paper. The Vibe smart whiteboard is an 85-inch touchscreen display that supports the Surface Slim Pen 2 and allows real-time collaboration. I have had the opportunity to use these products together, and they make a great team. The Surface Slim Pen 2 costs $129.99 and the Vibe smart whiteboard costs $21,999. Not saying that I can afford it! ;-|
10. KAT Walk C2+

This VR treadmill lets you walk, run, crouch, and jump in VR with haptic feedback. It has a low-friction surface that adapts to your movements, a wireless harness that keeps you balanced, and sensors that track your position and speed. Haptic modules attach to your ankles and vibrate when you step on different terrains in VR. The KAT Walk C2+ is compatible with most VR headsets and games and costs $1,999.
Haptic technology has applications beyond gaming and virtual reality (VR), and some pretty cool devices are yet to come. Here are some honorable mentions that didn’t make our top 10 haptic feedback devices currently available on the market:
We couldn’t help but give an honorable mention to the Phantom Premium 1.5. This remarkable device brings a new level of precision to surgical procedures by providing force feedback directly to the surgeon’s hand. It offers both force feedback and vibrotactile feedback, giving surgeons a heightened sense of touch and letting them accurately perceive the resistance of tissue and other vital structures during procedures.
Engineers at City University of Hong Kong developed WeTac, a wearable electronic “skin” in December last year. Unlike bulky alternatives, WeTac uses a hydrogel-based system with 32 electrodes, offering a wide range of sensations. This tech could enhance gaming experiences or enable remote robot control, adding a new dimension to virtual and augmented reality.
Apple has finally unveiled its mixed reality headset, which we expected to compete with Meta’s Quest 3. Although the Vision Pro’s release was all but inevitable, its price was one of the most anticipated details, and at $3,500 it landed above our expected range of $2,000 to $3,000. As innovative as ever, Apple has made it absolutely clear that it is not competing on price. So the heavy price tag is just that; you can’t really call it expensive, because nothing else like it exists yet. The good news? The name already carries a “Pro,” which may limit how many pricier ‘Pro’ variants the company can add later.
When it comes to a mixed reality headset, there are numerous expectations, especially if it’s to have an Apple logo on it. We expect at least one revolutionary feature. And the pricey Vision Pro has got one.
“I’m not even kidding, this eye tracking is sick”, said Marques Brownlee in a recent video. He added that it’s like the closest thing he’s experienced to magic.
Eye tracking is not a brand-new technology, even for VR/AR headsets. The PSVR2’s eye tracking, for example, boosts graphical fidelity where you look and slightly blurs everything else, which both mimics the way our real eyes focus and boosts the visible graphics.
How Apple’s eye tracking technology stands out
Look at a thing, click it, and it gets clicked. Remember how Apple revolutionized mobile phones with the touchscreen display a decade and a half ago? This is the next exponential leap.
No wonder why MKBHD calls it magical, right?
Apple achieves this impressive eye-tracking functionality by integrating an array of cameras and multiple infrared LEDs within the mixed reality headset.
Traditionally, VR/AR headsets have relied on controllers or sensors to track user interactions. Apple’s mixed reality headset instead uses eye-tracking cameras that precisely monitor the user’s gaze, plus cameras that detect finger taps. Users can simply look at an object or interface element and click it with their bare fingers, without any physical controllers. That’s something, isn’t it?
Retirement is a rapidly approaching reality for many, and technology is visibly transforming what it will look like. For one, people are living longer and need to save more. And the ‘golden years’ are now painted with a bit of silver, as 58% of Americans plan to work in retirement. Life is also becoming more expensive as healthcare costs rise.
On the flip side, retirement accounts are growing steadily. As of 2023, the average 401(k) balance for individuals aged 65 and above has reached $203,000. The share of employers offering matching retirement contributions has also risen steadily over time.
Interestingly enough, Gen Z, the youngest working generation, is leading in retirement savings compared to their parents and grandparents at the same age. Today’s young people clearly understand that planning and saving early is crucial, which is a great sign for the nation’s economy. But how will they retire? What does the future of retirement actually look like?
Key Technological Breakthroughs that have impacted retirement
Printing press (15th century tech)
Back in the mid-15th century, Johannes Gutenberg revolutionized information dissemination with his invention of the printing press. It made producing books, pamphlets, and financial information (investments, loans, trade transactions, and other financial dealings) more accessible and affordable, allowing people to navigate their financial future.
The Steam Engine (1710s)
First built commercially by Thomas Newcomen in 1712 and later dramatically improved by James Watt, the steam engine was a crucial development in the Industrial Revolution. It made it possible to power machinery and transportation in new ways, creating new industries and changing the nature of work. This may have shaped retirement (although ‘retirement’ was not a common term until the 1900s) by creating new job opportunities and changing the skills and knowledge needed for different types of work.
Telephone (1876)
The telephone revolutionized the way people communicated with each other. People were able to manage their finances remotely, contact financial advisors, and receive information about their investments. It also made it easier to stay connected with their loved ones, which indeed is an important aspect of emotional and mental well-being especially in post-work age.
Electric light (1879)
The invention of the electric light bulb by Thomas Edison in 1879 allowed for longer hours of productivity and leisure. Of course, it had a significant impact on financial future planning and post-work life, as it enabled people to stay up later and enjoy their leisure time with better lighting.
Automated Teller Machine (1967)
In 1967, Barclays Bank installed the first-ever Automated Teller Machine (ATM) in London. This revolutionized banking, making it easier for retirees to access their money and manage their finances.
Computerized financial planning tool (1969)
The first computerized financial planning tool, the Life Insurance Company Evaluation (LIFE) system, was developed by American Express in 1969. It calculated individuals’ life insurance needs and retirement income requirements.
Electronic stock exchange (1971)
1971 saw the launch of the National Association of Securities Dealers Automated Quotations (NASDAQ), the first electronic stock exchange. This made it easier and more efficient for retirees to invest in the stock market without the need for a physical trading floor.
Telemedicine service (1993)
In 1993, the first telemedicine service, Consult-a-Nurse, was launched, allowing patients to consult healthcare professionals over the phone. Retirees living in rural or remote areas could access healthcare services more easily.
Online retirement calculator (1997)
The Vanguard Retirement Nest Egg Calculator provided retirees with a free, easy-to-use tool to estimate how much money they would need in retirement.
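Calculators of this kind typically apply compound growth to recurring contributions. Below is a minimal sketch of the underlying arithmetic; the 6% return and the dollar figures are illustrative assumptions, not Vanguard's actual model.

```python
def nest_egg(annual_saving: float, years: int, annual_return: float = 0.06) -> float:
    """Future value of a fixed annual contribution at a constant return.

    Each year the existing balance grows by `annual_return`, then the
    year's contribution is added (end-of-year deposits).
    """
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + annual_saving
    return balance


# Illustrative: saving $10,000 a year for 30 years at an assumed 6% return.
print(f"${nest_egg(10_000, 30):,.0f}")
```

The key insight such tools made tangible is the compounding itself: the 30-year balance is far more than 30 times the annual contribution, which is exactly why saving early matters so much.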
Mobile health app (2008)
The first mobile health app, iStethoscope, was developed in 2008, letting healthcare professionals use their smartphones as a stethoscope to listen to patients’ heartbeats and lung sounds. While not specific to retirees, the concept has significantly improved access to medical care for patients who have difficulty traveling or limited access to medical facilities.
A majority of retirees and pre-retirees are concerned about healthcare costs. But as technology delivers telemedicine and remote monitoring that reduce those costs, the concern is slowly easing. The use of wearable technology, for example, will only increase with time. And automation (of course) will impact retirement by both eliminating and creating jobs.
In this article, our main focus will be on technology’s impacts on various facets of retirement, with the broader impact of individual tech devices taking a back seat.
1. Impact on what it means to be retired
Here’s what numbers say on different age groups’ current usage of technology:
As you can see in Table 1 above, people in the 65+ age group (who are most likely retired) use far less technology in their retired lives. But guess what? The stats are about to change.
Virtual reality is already redefining retirement life. It has enabled people to explore the whole world without leaving their homes. [Funnily enough, the first VR headset came out in the 1960s.] The extent of VR’s impact on retirement will increase with time, but it’s not the only tech influencing retirement:
Retirement homes will be replaced by smart homes with assistive technology.
Robots will provide healthcare and companionship.
Autonomous vehicles will help retirees get around.
And retirees will be able to stay connected with family and friends through more than social media and video conferencing.
Tech overall will enable people to work remotely and supplement their retirement income.
The future of retirement life, as such, seems tempting. Of course, this technological excitement surrounding retirement is not without its drawbacks. As people retire earlier and work less, the economy may suffer. The UK, for example, is experiencing labor shortages due to early retirement. The current reasons for early retirement there obviously vary. But yes, technology (especially VR) could make retirement life so comfortable, and hence so tempting, that it harms the overall economy.
2. The correlation between technology and human lifespan
The future of retirement is intertwined with the correlation between technology and lifespan. In 1900, the average global lifespan was 31 years; today, it is 76. Technology has enabled medical breakthroughs like antibiotics, vaccinations, and surgical procedures, reducing deaths from infectious diseases as well as infant and maternal mortality rates. The effect of tech here, as you see, is pretty straightforward. In 2019, BofA predicted that (with the rise of medtech) the human lifespan could soon surpass the 100-year mark. This makes one wonder even more about the future of retirement. How healthy that longer lifespan will be is also going to be a crucial factor. At the least, we can expect advancements in assistive technology to help older adults age in place and maintain their work life for longer. But even better would be people maintaining good natural health to remain in the workforce.
3. Impact on retirement savings and investments
Savings is not the first thing that comes to mind when thinking about the impact of technology on retirement, but it’s an area where technology is having a transformative effect. Technology is enabling people to take greater control over their retirement savings. For example, online tools and calculators help individuals understand how much they need to save to meet their retirement goals. The global fintech market is expected to grow to $556.58 billion by 2030, which will provide new investment and financial planning tools for pre-retirees.
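As a toy illustration of what such calculators do under the hood, here is a minimal sketch of the future-value-of-an-annuity math most retirement calculators are built on. The function name, the 6% return, and the $1,000,000 target are illustrative assumptions, not taken from any specific product, and this is not financial advice:

```python
def required_monthly_saving(target, annual_return, years):
    """Monthly contribution needed to grow to `target`, assuming a constant
    annual return compounded monthly. Illustrative only, not advice."""
    r = annual_return / 12          # monthly rate
    n = years * 12                  # number of contributions
    # Future value of an ordinary annuity: FV = m * ((1 + r)**n - 1) / r
    # Solve for the monthly contribution m given FV = target:
    return target * r / ((1 + r) ** n - 1)

# e.g. saving toward $1,000,000 over 30 years at a 6% nominal annual return
monthly = required_monthly_saving(1_000_000, 0.06, 30)
print(f"${monthly:,.2f} per month")  # roughly $1,000 per month
```

Real calculators layer on inflation adjustments, tax treatment, and variable returns, but this compound-growth core is what makes them so much more useful than mental arithmetic.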
Ah, but wait! As they say, there’s always another side to a story:
High-frequency trading has turned the stock market into a wild beast, capable of sudden and violent movements. This poses significant challenges to long-term and retirement investors, challenges that extend beyond mere financial losses. As technology evolves, new forms of trading will emerge, and the market could become even more frenetic. The use of technology in retirement savings is also already leading to over-reliance on automation and a lack of human due diligence. And it tends to create a false sense of security [for example, it is currently leading some pre-retirees to invest heavily in speculative digital assets (like crypto) as a “hedge against inflation” rather than saving].
To Conclude
Okay, below is an interactive chart comparing the historical median US retirement savings (since 1983) against the growth of the NASDAQ-100 Technology Sector Index (NDXT) in recent times (since 2007). Comparing the time each took to triple, progress in the technology sector is roughly 2-4 times faster than the growth in retirement savings:
Comparison Chart
So, in 1989, the average retirement savings account was $21,878, and it took 24 years for a threefold increase. However, the technology industry tripled in just 10 years from 2013 to 2023, and in fact, sextupled if we measure from 2007 (even if we exclude the sharp peak in 2019).
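To make the “roughly 2-4 times faster” claim concrete, here is a quick back-of-the-envelope check, a sketch using only the tripling times quoted above (the function and variable names are mine), that converts each tripling time into an implied compound annual growth rate:

```python
def implied_cagr(growth_multiple, years):
    """Compound annual growth rate implied by multiplying by
    `growth_multiple` over `years` years."""
    return growth_multiple ** (1 / years) - 1

# Median retirement savings: tripled in 24 years (per the chart above)
savings_cagr = implied_cagr(3, 24)   # ~4.7% per year
# Technology sector (NDXT): tripled in 10 years (2013 to 2023)
tech_cagr = implied_cagr(3, 10)      # ~11.6% per year

print(f"savings: {savings_cagr:.1%}, tech: {tech_cagr:.1%}, "
      f"ratio: {tech_cagr / savings_cagr:.1f}x")
```

The implied growth-rate ratio comes out to about 2.5x, comfortably inside the 2-4x range the chart comparison suggests.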
Overall, it’s not that technology disrupts retirement. Rather, technology and retirement both undergo gradual changes over time, with technology always being in the driver’s seat. All aspects of retirement are bound to digitize in the future. But the magnitude and immediacy of the impact ultimately depends on how exponential the technological growth indeed is.
1962’s Spacewar (created at MIT) is widely recognized as the first computer video game. But if we dig into history, physicist William Higinbotham’s “Tennis for Two” was actually created in 1958. The claims differ because definitions of “computer game”, and the criteria for what qualifies as a game played on a computer, vary.
We can still safely say that the first computer game was developed during the late 1950s and early 1960s. And as the 1960s are commonly used as a reference point to represent the origins of computer gaming, we’re going to adhere to that.
Okay, one thing is for sure – regardless of its specific content, the initial computer video game represented the inception of digital gaming, which includes handheld devices, arcade machines, and even gaming consoles.
This is the game we’re talking about, Spacewar of the 1960s:
Looks exactly like what you would assume a 1960s game would look like, doesn’t it? And computer gaming is not the only technological field in which we’ve come a long way since then.
Here’s the evolution of other technologies since the first computer game:
Computing overall: Mainframe computers → personal computers → internet and World Wide Web → cloud computing → big data analytics → artificial intelligence (AI)
Telecommunications: Satellite communication → mobile phones and cellular networks → internet and digital communication → 5G networks → internet of things (IoT)
Biotechnology: Genetic engineering → gene editing → personalized medicine → completion of the Human Genome Project → rise of synthetic biology → AI in drug discovery
Aerospace: Manned missions to the moon → unmanned missions to Mars and beyond → reusable spacecraft (Space Shuttle) → commercial spaceflight → International Space Station → increased private investment in space exploration and development of new launch vehicles
Nanotechnology: Early research on quantum mechanics → development of the scanning tunneling microscope → nanomaterials and nanodevices → nanoscale manipulation and assembly → nanomedicine and nanobiotechnology
Now, using the 60s’ best computer game to represent the decade in technology, here’s exactly how far we’ve come:
[Images: Lunar Lander (the 60s’ best computer video game) vs. a normal 2020s video game. Image credit: PCGamesN]
Wasn’t this progress just inevitable?
First off, what progress? Computer gaming industry revenue ($184.4 billion) now surpasses that of film and music combined. But in this section, the progress we’re talking about is the overall technological state, with gaming as its representative.
Well, let’s face it: the progress was about as inevitable as a dog chasing a tennis ball. The decisive part, indeed, was the creation of the first computer video game itself, the creation of the concept. But there are many other elements to it. Even if we were bound to reach where we are now, it could have taken a lot longer. Here are the factors that could have made or broken, quickened or slowed down, the timeline of technological evolution (in this case, of PC video games):
a. The impact of other technologies
Technological advancements facilitate the growth of other technologies, and the timeline of computer video games owes much to exactly this dynamic.
The mid-1990s saw the widespread adoption of the internet. This global connectivity spawned MMOs and ultimately shaped the future of online gaming.
The rise and growth of mobile devices led to a surge in mobile gaming, expanding games’ reach and accessibility.
Artificial intelligence has always been crucial in gaming, improving gameplay, graphics, and player experience.
b. Social acceptance
Oddly enough, society has accepted gaming; things could have been very different if it hadn’t. With this acceptance came the growing popularity of video games, and with it, the demand for new and innovative games.
The North American video game industry suffered a severe crash in 1983, leading to public skepticism about video games’ long-term viability.
Home computer gaming (and the internet) rose in the 1990s. This gradually forged the path for increased social acceptance of gaming as a legitimate pastime.
2023 – Gaming has become a major cultural force. Events like E3 and the Game Awards attract millions of viewers every year.
c. Social Media
Social media was an unexpected turning point for computer video gaming. Especially so because social media itself had to be invented; it was not ‘discovered’.
The rise of YouTube and Twitch in the mid-2000s helped to popularize esports and live-streaming of gameplay.
Reddit and Discord have become important channels for gaming news and discussion.
Gaming would have been a lot different without social media, and its advancement would’ve been a protracted journey.
d. Economic incentives
Economic incentives undoubtedly play a crucial role in the growth of any technology or sector.
The US economic boom of the 1990s allowed for increased investment in game development and marketing.
So, gaming has come a long way since the first computer game, and it was quite bound to happen. Yet the factors above, “social acceptance, social media, economic incentives, gaming hardware innovation, and the impact of other technologies”, not only drove the gaming industry but also created a clear road to drive on.