Technology
25 Biggest Moments in Search, From Helpful Images to AI
Here’s how we’ve made Search more helpful over 25 years — and had a little fun along the way, too.
When Google first launched 25 years ago, it was far from the first search engine. But quickly, Google Search became known for our ability to help connect people to the exact information they were looking for, faster than they ever thought possible.
Over the years, we’ve continued to innovate and make Google Search better every day. From creating entirely new ways to search, to helping millions of businesses connect with customers through search listings and ads (starting with a local lobster business advertising via AdWords in 2001), to having some fun with Doodles and easter eggs — it’s been quite a journey.
For our 25th birthday, we’re looking back at some of the milestones that made Google more helpful in the moments that matter, and played a big role in where Google is today. Learn more about our history in our Search Through Time site.
2001: Google Images
When Jennifer Lopez attended the 2000 Grammy Awards, her daring Versace dress became an instant fashion legend — and the most popular query on Google at the time. Back then, search results were just a list of blue links, so people couldn’t easily find the picture they were looking for. This inspired us to create Google Images.
2001: “Did you mean?”
“Did you mean,” with suggested spelling corrections, was one of our first applications of machine learning. Previously, if your search had a misspelling (like “floorescent”), we’d help you find other pages that had the same misspelling, which aren’t usually the best pages on the topic. Over the years we’ve developed new AI-powered techniques to ensure that even if your finger slips on the keyboard, you can find what you need.

2002: Google News
During the tragic events of September 11, 2001, people struggled to find timely information in Search. To meet the need for real-time news, we launched Google News the following year with links to a diverse set of sources for any given story.
2003: Easter eggs
Googlers have developed many clever Easter eggs hidden in Search over the years. In 2003, one of our first Easter eggs gave the answer to life, the universe and everything, and since then millions of people have turned their pages askew, done a barrel roll, enjoyed a funny recursive loop and celebrated moments in pop culture.

One of our earliest Easter eggs is still available on Search.
2004: Autocomplete
Wouldn’t it be nice to type as quickly as you think? Cue Autocomplete: a feature first launched as “Google Suggest” that automatically predicts queries in the search bar as you start typing. Today, on average, Autocomplete reduces typing by 25% and saves an estimated over 200 years of typing time per day.
2004: Local information
People used to rely on traditional phone books for business information. The web paved the way for local discovery, like “pizza in Chicago” or “haircut 75001.” In 2004, Google Local added relevant information to business listings like maps, directions and reviews. In 2011, we added click to call on mobile, making it easy to get in touch with businesses while you’re on the go. On average, local results in Search drive more than 6.5 billion connections for businesses every month, including phone calls, directions, ordering food and making reservations.
2006: Google Translate
Google researchers started developing machine translation technology in 2002 to tackle language barriers online. Four years later, we launched Google Translate with text translations between Arabic and English. Today, Google Translate supports more than 100 languages, with 24 added last year.

2006: Google Trends
Google Trends was built to help us understand trends on Search with aggregated data (and create our annual Year in Search). Today, Google Trends is the world’s largest free dataset of its kind, enabling journalists, researchers, scholars and brands to learn how searches change over time.
2007: Universal Search
Helpful search results should include relevant information across formats, like links, images, videos, and local results. So we redesigned our systems to search all of the content types at once, decide when and where results should blend in, and deliver results in a clear and intuitive way. The result, Universal Search, was our most radical change to Search at the time.
2008: Google Mobile App
With the arrival of Apple’s App Store, we launched our first Google Mobile App on iPhone. Features like Autocomplete and “My Location” made search easier with fewer key presses, and were especially helpful on smaller screens. Today, there’s so much you can do with the Google app — available on both Android and iOS — from getting help with your math homework with Lens to accessing visual translation tools in just a tap.
2008: Voice Search
In 2008, we introduced the ability to search by voice on the Google Mobile App, expanding to desktop in 2011. With Voice Search, people can search by voice with the touch of a button. Today, search by voice is particularly popular in India, where the percentage of Indians doing daily voice queries is nearly twice the global average.

2009: Emergency Hotlines
Following a suggestion from a mother who had a hard time finding poison control information after her daughter swallowed something potentially dangerous, we created a box for the poison control hotline at the top of the search results page. Since this launch, we’ve elevated emergency hotlines for critical moments in need like suicide prevention.
2011: Search by Image
Sometimes, what you’re searching for can be hard to describe with words. So we launched Search by Image so you can upload any picture or image URL, find out what it is and where else that image is on the web. This update paved the way for Lens later on.
2012: Knowledge Graph
We introduced the Knowledge Graph, a vast collection of people, places and things in the world and how they’re related to one another, to make it easier to get quick answers. Knowledge Panels, the first feature powered by the Knowledge Graph, give you a quick snapshot of information about topics like celebrities, cities and sports teams.

2015: Popular Times: We launched the Popular Times feature in Search and Maps to help people see the busiest times of the day when they search for places like restaurants, stores, and museums.
2016: Discover
By launching a personalized feed (now called Discover) we helped people explore content tailored to their interests right in the Google app, without having to search.
2017: Lens
Google Lens turns your camera into a search query by looking at objects in a picture, comparing them to other images, and ranking those other images based on their similarity and relevance to the original picture. Now, you can search what you see in the Google app. Today, Lens sees more than 12 billion visual searches per month.
2018: Flood forecasting
To help people better prepare for impending floods, we created forecasting models that predict when and where devastating floods will occur with AI. We started these efforts in India and today, we’ve expanded flood warnings to 80 countries.

2019: BERT
A big part of what makes Search helpful is our ability to understand language. In 2018, we introduced and open-sourced a neural network-based technique to train our language understanding models: BERT (Bidirectional Encoder Representations from Transformers). BERT makes Search more helpful by better understanding language, meaning it considers the full context of a word. After rigorous testing in 2019, we applied BERT to more than 70 languages. Learn more about how BERT works to understand your searches.
2020: Shopping Graph
Online shopping became a whole lot easier and more comprehensive when we made it free for any retailer or brand to show their products on Google. We also introduced Shopping Graph, an AI-powered dataset of constantly-updating products, sellers, brands, reviews and local inventory that today consists of 35 billion product listings.
2020: Hum to Search
We launched Hum to Search in the Google app, so you’ll no longer be frustrated when you can’t remember the tune that’s stuck in your head. The machine learning feature identifies potential song matches after you hum, whistle or sing a melody. You can then explore information on the song and artist.
2021: About this result
To help people make more informed decisions about which results will be most useful and reliable for them, we added “About this result” next to most search results. It explains why a result is being shown to you and gives more context about the content and its source, based on best practices from information literacy experts. ‘About this’ result is now available in all languages where Search is available.
2022: Multisearch
To help you uncover the information you’re looking for — no matter how tricky — we created an entirely new way to search with text and images simultaneously through Multisearch. Now you can snap a photo of your dining set and add the query “coffee table” to find a matching table. First launched in the U.S., Multisearch is now available globally on mobile, in all languages and countries where Lens is available.
2023: Search Labs & Search Generative Experience (SGE)
Every year in Search, we do hundreds of thousands of experiments to figure out how to make Google more helpful for you. With Search Labs, you can test early-stage experiments and share feedback directly with the teams working on them. The first experiment, SGE, brings the power of generative AI directly into Search. You can get the gist of a topic with AI-powered overviews, pointers to explore more and natural ways to ask follow ups. Since launching in the U.S., we’ve rapidly added new capabilities, with more to come.
As someone who’s been following the world of search engines for more than two decades, it’s amazing to reflect on where Google started — and how far we’ve come.
Technology
Lagos’ Team Nevo Wins 3MTT Southwest Regional Hackathon
By Adedapo Adesanya
Lagos State’s representative, Team Nevo, won the 3 Million Technical Talent (3MTT) South-West Regional Hackathon, on Tuesday, December 9, 2025.
The host state took the victory defeating pitches from other south west states, including Oyo, Ogun, Osun, Ekiti, and Ondo States.
This regional hackathon was a major moment for the 3MTT Programme, bringing together young innovators from across the South-West to showcase practical solutions in AI, software development, cybersecurity, data analysis, and other key areas of Nigeria’s digital future.
Launched by the Federal Ministry of Communications, Innovation, and Digital Economy, the hackathon brought together talented young innovators from across the Southwest region to showcase their digital solutions in areas such as Artificial Intelligence (AI)/Machine Learning, software development, data analysis, and cybersecurity, among others.
“This event not only highlights the potential of youth in South West but also advances the digital economy, fosters innovation, and creates job opportunities for our young people,” said Mr Oluwaseyi Ayodele, the Lagos State Community Manager.
Winning the hackaton was Team Nevo, made up of Miss Lydia Solomon and Mr Teslim Sadiq, whose inclusive AI learning tool which tailors academic learning experiences to skill sets of students got the top nod, with N500,000 in prize money.
Team Oyo represented by Microbiz, an AI business tool solution, came in second place winning N300,000 while Team Ondo’s Fincoach, a tool that guides individuals and businesses in marking smarter financial decisions, came third with N200,000 in prize money.
Others include The Frontiers (Team Osun), Ecocycle (Team Ogun), and Mindbud (Team Ekiti).
Speaking to Business Post, the lead pitcher for Team Nevo, Miss Solomon, noted, “It was a very lovely experience and the opportunity and access that we got was one of a kind,” adding that, “Expect the ‘Nevolution’ as we call it, expect the transformation of the educational sector and how Nevo is going to bring inclusion and a deeper level of understanding and learning to schools all around Nigeria.”
Earlier, during his keynote speech, the chief executive officer (CEO) of Sterling Bank, Mr Abubakar Suleiman, emphasised the need for Nigeria’s budding youth population to tap into the country’s best comparative advantage, drawing parallels with commodities and resources like cocoa, soyabeans, and uranium.
“Tech is our best bet to architect a comparative advantage. The work we are doing with technologies are very vital to levelling the playing field.”
Technology
re:Invent 2025: AWS Excites Tech Enthusiasts With Graviton5 Unveiling
By Aduragbemi Omiyale
One of the high points of the 2025 re:Invent was the unveiling of Graviton5, the fifth generation of custom Arm-based server processors from Amazon Web Services (AWS).
Many tech enthusiasts believe that the company pushed the limits with Graviton5, its most powerful and efficient CPU, frontier agents that can work autonomously for days, an expansion of the Amazon Nova model family, Trainium3 UltraServers, and AWS AI Factories suitable for implementing AI infrastructure in customers’ existing data centres.
Graviton5—the company’s most powerful and efficient CPU
As cloud workloads grow in complexity, organizations face a persistent challenge to deliver faster performance at lower costs and meet sustainability commitments without trade-offs.
AWS’ new Graviton5-based Amazon EC2 M9g delivers up to 25% higher performance than its previous generation, with 192 cores per chip and 5x larger cache.
For the third year in a row, more than half of new CPU capacity added to AWS is powered by Graviton, with 98 per cent of the top 1,000 EC2 customers—including Adobe, Airbnb, Epic Games, Formula 1, Pinterest, SAP, and Siemens—already benefiting from Graviton’s price performance advantages.
Expansion of Nova family of models and pioneers “open training” with Nova Forge
Amazon is expanding its Nova portfolio with four new models that deliver industry-leading price-performance across reasoning, multimodal processing, conversational AI, code generation, and agentic tasks. Nova Forge pioneers “open training,” giving organizations access to pre-trained model checkpoints and the ability to blend proprietary data with Amazon Nova-curated datasets.
Nova Act achieves breakthrough 90% reliability for browser-based UI automation workflows built by early customers. Companies like Reddit are using Nova Forge to replace multiple specialized models with a single solution, while Hertz accelerated development velocity by 5x with Nova Act.
Addition of 3 frontier agents, a new class of AI agents that work as an extension of your software development team
Frontier agents represent a step-change in what agents can do. They’re autonomous, scalable, and can work for hours or days without intervention. AWS announced three frontier agents—Kiro autonomous agent, AWS Security Agent, and AWS DevOps Agent. Kiro autonomous agent acts as a virtual developer for your team, AWS Security Agent is your own security consultant, and AWS DevOps Agent is your on-call operational team.
Companies, including Commonwealth Bank of Australia, SmugMug, and Wester Governors University have used one or more of these agents to transform the software development lifecycle.
Unveiling Trainium3 UltraServers
As AI models grow in size and complexity, training cutting-edge models requires infrastructure investments that only a handful of organizations can afford.
Amazon EC2 Trn3 UltraServers, powered by AWS’s first 3nm AI chip, pack up to 144 Trainium3 chips into a single integrated system, delivering up to 4.4x more compute performance and 4x greater energy efficiency than Trainium2 UltraServers.
Customers achieve 3x higher throughput per chip while delivering 4x faster response times, reducing training times from months to weeks. Customers including Anthropic, Karakuri, Metagenomi, NetoAI, Ricoh, and Splash Music are reducing training and inference costs by up to 50 per cent with Trainium, while Decart is achieving 4x faster inference for real-time generative video at half the cost of GPUs, and Amazon Bedrock is already serving production workloads on Trainium3.
Technology
NITDA Alerts Nigerians to ChatGPT Vulnerabilities
By Adedapo Adesanya
The National Information Technology Development Agency (NITDA) has issued an advisory on new vulnerabilities in ChatGPT that could expose users to data-leakage attacks.
According to the advisory, researchers discovered seven vulnerabilities affecting GPT-4o and GPT-5 models that allow attackers to manipulate ChatGPT through indirect prompt injection.
The agency explained that hidden instructions placed inside webpages, comments, or Uniform Resource Locators (URLs) can trigger unintended commands during regular browsing, summarisation, or search actions.
“By embedding hidden instructions in webpages, comments, or crafted URLs, attackers can cause ChatGPT to execute unintended commands simply through normal browsing, summarization, or search actions,” they stated.
The warning followed rising concerns about AI-powered tools interacting with unsafe web content and the growing dependence on ChatGPT for business, research, and public-sector tasks.
NITDA added that some flaws allow the bypassing of safety controls by masking malicious content behind trusted domains.
Other weaknesses take advantage of markdown rendering bugs, enabling hidden instructions to pass undetected.
It explained that in severe cases, attackers can poison ChatGPT’s memory, forcing the system to retain malicious instructions that influence future conversations
They stated that while OpenAI has fixed parts of the issue, Large-Language Models (LLMs) still struggle to reliably separate genuine user intent from malicious data.
The Agency warned that these vulnerabilities could lead to a range of cybersecurity threats, including unauthorised actions carried out by the model; unintended exposure of user information; manipulated or misleading outputs; and long-term behavioural changes caused by memory poisoning, among others.
It advised Nigerians, businesses, and government institutions to adopt several precautionary steps to stay safe. These include limiting or disabling the browsing and summarisation of untrusted websites within enterprise environments and enabling features like browsing or memory only when necessary.
It also recommended regular updates to deployed GPT-4o and GPT-5 models to ensure known vulnerabilities are patched.
-
Feature/OPED6 years agoDavos was Different this year
-
Travel/Tourism9 years ago
Lagos Seals Western Lodge Hotel In Ikorodu
-
Showbiz3 years agoEstranged Lover Releases Videos of Empress Njamah Bathing
-
Banking7 years agoSort Codes of GTBank Branches in Nigeria
-
Economy3 years agoSubsidy Removal: CNG at N130 Per Litre Cheaper Than Petrol—IPMAN
-
Banking3 years agoFirst Bank Announces Planned Downtime
-
Banking3 years agoSort Codes of UBA Branches in Nigeria
-
Sports3 years agoHighest Paid Nigerian Footballer – How Much Do Nigerian Footballers Earn











