
Technology

25 Biggest Moments in Search, From Helpful Images to AI


Here’s how we’ve made Search more helpful over 25 years — and had a little fun along the way, too.

When Google first launched 25 years ago, it was far from the first search engine. But quickly, Google Search became known for our ability to help connect people to the exact information they were looking for, faster than they ever thought possible.

Over the years, we’ve continued to innovate and make Google Search better every day. From creating entirely new ways to search, to helping millions of businesses connect with customers through search listings and ads (starting with a local lobster business advertising via AdWords in 2001), to having some fun with Doodles and easter eggs — it’s been quite a journey.

For our 25th birthday, we’re looking back at some of the milestones that made Google more helpful in the moments that matter, and played a big role in where Google is today. Learn more about our history in our Search Through Time site.

2001: Google Images

When Jennifer Lopez attended the 2000 Grammy Awards, her daring Versace dress became an instant fashion legend — and the most popular query on Google at the time. Back then, search results were just a list of blue links, so people couldn’t easily find the picture they were looking for. This inspired us to create Google Images.

2001: “Did you mean?”

“Did you mean,” with suggested spelling corrections, was one of our first applications of machine learning. Previously, if your search had a misspelling (like “floorescent”), we’d help you find other pages that had the same misspelling, which aren’t usually the best pages on the topic. Over the years we’ve developed new AI-powered techniques to ensure that even if your finger slips on the keyboard, you can find what you need.
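Google's production spelling models are far more sophisticated, but the core idea of mapping a misspelling to a likely intended word can be sketched with Python's standard library (a dictionary-lookup illustration, not Google's actual method; the vocabulary is invented for the example):

```python
from difflib import get_close_matches

# A tiny vocabulary standing in for a real dictionary of query terms.
VOCABULARY = ["fluorescent", "florescence", "flour", "floor", "forest"]

def did_you_mean(query: str, vocabulary=VOCABULARY):
    """Return the closest vocabulary word to a possibly misspelled query."""
    matches = get_close_matches(query.lower(), vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(did_you_mean("floorescent"))  # suggests "fluorescent"
```

A real system would also weigh how often each candidate is actually searched, so a popular correct spelling beats an obscure near-match.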

2002: Google News

During the tragic events of September 11, 2001, people struggled to find timely information in Search. To meet the need for real-time news, we launched Google News the following year with links to a diverse set of sources for any given story.

2003: Easter eggs

Googlers have developed many clever Easter eggs hidden in Search over the years. In 2003, one of our first Easter eggs gave the answer to life, the universe and everything, and since then millions of people have turned their pages askew, done a barrel roll, enjoyed a funny recursive loop and celebrated moments in pop culture.

One of our earliest Easter eggs is still available on Search.

2004: Autocomplete

Wouldn’t it be nice to type as quickly as you think? Cue Autocomplete: a feature first launched as “Google Suggest” that automatically predicts queries in the search bar as you start typing. Today, Autocomplete reduces typing by 25% on average and saves an estimated 200-plus years of typing time per day.
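The real system ranks predictions using live query data; the basic prefix-matching mechanic behind it can be sketched with a sorted list and binary search (the query log and popularity counts below are made up for illustration):

```python
import bisect

# Hypothetical past queries with popularity counts.
QUERY_LOG = {
    "weather today": 900,
    "weather tomorrow": 750,
    "web browser": 500,
    "webcam test": 300,
}
SORTED_QUERIES = sorted(QUERY_LOG)

def autocomplete(prefix: str, k: int = 3):
    """Return up to k logged queries starting with prefix, most popular first."""
    lo = bisect.bisect_left(SORTED_QUERIES, prefix)
    hi = bisect.bisect_right(SORTED_QUERIES, prefix + "\uffff")
    candidates = SORTED_QUERIES[lo:hi]
    return sorted(candidates, key=QUERY_LOG.get, reverse=True)[:k]

print(autocomplete("we"))  # most popular queries starting with "we"
```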

2004: Local information

People used to rely on traditional phone books for business information. The web paved the way for local discovery, like “pizza in Chicago” or “haircut 75001.” In 2004, Google Local added relevant information to business listings like maps, directions and reviews. In 2011, we added click to call on mobile, making it easy to get in touch with businesses while you’re on the go. On average, local results in Search drive more than 6.5 billion connections for businesses every month, including phone calls, directions, ordering food and making reservations.

2006: Google Translate

Google researchers started developing machine translation technology in 2002 to tackle language barriers online. Four years later, we launched Google Translate with text translations between Arabic and English. Today, Google Translate supports more than 100 languages, with 24 added last year.

2006: Google Trends

Google Trends was built to help us understand trends on Search with aggregated data (and create our annual Year in Search). Today, Google Trends is the world’s largest free dataset of its kind, enabling journalists, researchers, scholars and brands to learn how searches change over time.

2007: Universal Search

Helpful search results should include relevant information across formats, like links, images, videos, and local results. So we redesigned our systems to search all of the content types at once, decide when and where results should blend in, and deliver results in a clear and intuitive way. The result, Universal Search, was our most radical change to Search at the time.

2008: Google Mobile App

With the arrival of Apple’s App Store, we launched our first Google Mobile App on iPhone. Features like Autocomplete and “My Location” made search easier with fewer key presses, and were especially helpful on smaller screens. Today, there’s so much you can do with the Google app — available on both Android and iOS — from getting help with your math homework with Lens to accessing visual translation tools in just a tap.

2008: Voice Search

In 2008, we introduced the ability to search by voice in the Google Mobile App, expanding to desktop in 2011. With Voice Search, people can speak their queries at the touch of a button. Today, searching by voice is particularly popular in India, where the share of people making daily voice queries is nearly twice the global average.

2009: Emergency Hotlines

Following a suggestion from a mother who had a hard time finding poison control information after her daughter swallowed something potentially dangerous, we created a box for the poison control hotline at the top of the search results page. Since this launch, we’ve elevated emergency hotlines for other critical moments of need, like suicide prevention.

2011: Search by Image

Sometimes, what you’re searching for can be hard to describe with words. So we launched Search by Image so you can upload any picture or image URL, find out what it is and where else that image is on the web. This update paved the way for Lens later on.

2012: Knowledge Graph

We introduced the Knowledge Graph, a vast collection of people, places and things in the world and how they’re related to one another, to make it easier to get quick answers. Knowledge Panels, the first feature powered by the Knowledge Graph, give you a quick snapshot of information about topics like celebrities, cities and sports teams.

2015: Popular Times

We launched the Popular Times feature in Search and Maps to help people see the busiest times of day when they search for places like restaurants, stores and museums.

2016: Discover

By launching a personalized feed (now called Discover) we helped people explore content tailored to their interests right in the Google app, without having to search.

2017: Lens

Google Lens turns what your camera sees into a search query: it looks at the objects in a picture, compares them to other images, and ranks those images by their similarity and relevance to the original. Now, you can search what you see in the Google app. Today, Lens handles more than 12 billion visual searches per month.
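The ranking-by-similarity step described above can be sketched with cosine similarity over image feature vectors (the embeddings and labels below are invented; real systems derive such vectors from deep vision models):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical index of image embeddings: label -> feature vector.
INDEX = {
    "golden retriever": [0.9, 0.1, 0.0],
    "labrador":         [0.8, 0.2, 0.1],
    "tabby cat":        [0.1, 0.9, 0.2],
}

def rank_matches(query_vec):
    """Rank indexed images by similarity to the query image's embedding."""
    return sorted(INDEX, key=lambda k: cosine(query_vec, INDEX[k]), reverse=True)

print(rank_matches([0.85, 0.15, 0.05]))  # dog-like query ranks dogs first
```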

2018: Flood forecasting

To help people better prepare for impending floods, we used AI to build forecasting models that predict when and where devastating floods will occur. We started these efforts in India, and today we’ve expanded flood warnings to 80 countries.

2019: BERT

A big part of what makes Search helpful is our ability to understand language. In 2018, we introduced and open-sourced BERT (Bidirectional Encoder Representations from Transformers), a neural network-based technique for training our language understanding models. BERT makes Search more helpful by considering the full context of a word rather than treating it in isolation. After rigorous testing in 2019, we applied BERT to more than 70 languages. Learn more about how BERT works to understand your searches.

2020: Shopping Graph

Online shopping became a whole lot easier and more comprehensive when we made it free for any retailer or brand to show their products on Google. We also introduced the Shopping Graph, an AI-powered dataset of constantly updating products, sellers, brands, reviews and local inventory that today consists of 35 billion product listings.

2020: Hum to Search

We launched Hum to Search in the Google app, so you’ll no longer be frustrated when you can’t remember the tune that’s stuck in your head. The machine learning feature identifies potential song matches after you hum, whistle or sing a melody. You can then explore information on the song and artist.
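Google's feature relies on learned models over audio, but the underlying intuition of matching a hummed melody by its pitch contour can be illustrated with Parsons code (U = up, D = down, R = repeat); the song database and note sequences below are invented for the example:

```python
def parsons_code(pitches):
    """Encode a melody as U/D/R steps between successive pitches."""
    return "".join(
        "U" if b > a else "D" if b < a else "R"
        for a, b in zip(pitches, pitches[1:])
    )

# Hypothetical database: song title -> MIDI note numbers of its opening.
SONGS = {
    "Ode to Joy":     [64, 64, 65, 67, 67, 65, 64, 62],
    "Happy Birthday": [60, 60, 62, 60, 65, 64],
}

def match_hum(hummed_pitches):
    """Return songs whose pitch contour contains the hummed contour."""
    hum = parsons_code(hummed_pitches)
    return [title for title, notes in SONGS.items()
            if hum in parsons_code(notes)]

# Humming can be off-key; only the up/down shape of the melody has to match.
print(match_hum([50, 50, 52, 50, 57]))  # matches "Happy Birthday"
```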

2021: About this result

To help people make more informed decisions about which results will be most useful and reliable for them, we added “About this result” next to most search results. It explains why a result is being shown to you and gives more context about the content and its source, based on best practices from information literacy experts. “About this result” is now available in all languages where Search is available.

2022: Multisearch

To help you uncover the information you’re looking for — no matter how tricky — we created an entirely new way to search with text and images simultaneously through Multisearch. Now you can snap a photo of your dining set and add the query “coffee table” to find a matching table. First launched in the U.S., Multisearch is now available globally on mobile, in all languages and countries where Lens is available.

2023: Search Labs & Search Generative Experience (SGE)

Every year in Search, we run hundreds of thousands of experiments to figure out how to make Google more helpful for you. With Search Labs, you can test early-stage experiments and share feedback directly with the teams working on them. The first experiment, SGE, brings the power of generative AI directly into Search. You can get the gist of a topic with AI-powered overviews, pointers to explore more and natural ways to ask follow-ups. Since launching in the U.S., we’ve rapidly added new capabilities, with more to come.

As someone who’s been following the world of search engines for more than two decades, it’s amazing to reflect on where Google started — and how far we’ve come.

Technology

Our Goal is to Meet Soaring Demand for Connectivity—MTN


By Dipo Olowookere

The Chief Strategy and Innovation Officer for MTN Nigeria, Mr Babalola Oyeleye, has disclosed that the telecommunications company intends to expand its infrastructure to give its customers quality service.

The demand for connectivity in Nigeria is growing, and with a new forecast predicting the Internet of Things (IoT) market to reach $38.7 billion by 2030, stakeholders, especially operators, are already positioning themselves to dominate the space.

Government and private sector investments in digital transformation have created an ecosystem that includes system integrators and security specialists. Industries such as utilities and agriculture are leading the charge, adopting IoT to solve localised problems like power theft and low crop yields.

Currently, 4G coverage has reached approximately 80 per cent of Nigeria’s population, with 5G services already in major cities like Lagos, Abuja, Port Harcourt, and Kano. This connectivity backbone is essential for the low-latency communication required by millions of connected devices.

“Reaching the $38.7 billion mark isn’t just about the numbers; it’s about the millions of data points helping Nigerian SMEs and large corporations make smarter decisions every day. Our goal is to ensure the connectivity is there to meet this soaring demand,” Mr Oyeleye noted.

As the ecosystem matures, the focus is shifting toward all-in-one solutions that simplify the user experience. With ongoing investments in NB-IoT (Narrowband IoT) and other low-power connectivity options, the next five years are set to see an explosion in smart city and smart home applications across the country.


Technology

Refiant AI Raises $5m to Cut AI Energy Use


By Adedapo Adesanya

South African-founded Refiant AI has raised $5 million to slash the energy footprint of artificial intelligence (AI) in a seed round led by VoLo Earth Ventures, a top climate technology fund.

The startup uses nature-inspired algorithms to radically compress AI models, slashing the hardware and energy required to run them. The new funding will be used to scale Refiant’s team – which already includes a former Google Cloud architect, a Cambridge PhD researcher, and an engineer with NASA experience – to build out a platform and to accelerate enterprise partnerships.

According to a statement shared with Business Post, the company is in active conversations with several multinational technology firms exploring how Refiant’s approach could reduce their AI compute costs while maintaining data and energy sovereignty.

“AI’s growing energy footprint is one of the most urgent and underappreciated challenges in the climate space,” said Mr Sid Gutta, the company’s co-founder. “The industry’s default answer is to build more data centres and consume more power. Ours is to make the AI itself dramatically more efficient.”

The company said it has already demonstrated that it can compress a 120 billion parameter AI model to run on a standard laptop, cutting energy requirements by over 80 per cent while preserving near-identical quality. The compressed model ran on a MacBook Pro with just 12GB of RAM; the same model would normally require hardware with at least 80GB of memory. It retained 95-99 per cent of its fidelity, ran alongside a second AI model on the same machine, and the entire process took four hours with no cloud computing required.
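A back-of-the-envelope check of those figures, assuming the memory numbers refer mostly to model weights, shows why this implies compression well beyond conventional quantization; the calculation below uses only numbers from the article:

```python
params = 120e9           # 120 billion parameters
baseline_gb = 80         # memory the model normally needs
compressed_gb = 12       # RAM of the MacBook Pro it ran on

def bits_per_param(gigabytes, n_params):
    """Average storage per parameter, using decimal GB (1 GB = 8e9 bits)."""
    return gigabytes * 8e9 / n_params

print(f"baseline:   {bits_per_param(baseline_gb, params):.2f} bits/param")
print(f"compressed: {bits_per_param(compressed_gb, params):.2f} bits/param")
print(f"reduction:  {baseline_gb / compressed_gb:.1f}x")
```

At 12GB for 120 billion parameters, the model averages under one bit per parameter, far below the roughly four bits per parameter of standard quantized deployments, which is consistent with the claim that the technique goes beyond off-the-shelf compression.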

Refiant says its approach will help businesses reduce their carbon footprint while adopting AI to stay competitive: the energy required to process a single AI prompt on standard infrastructure could power roughly 100 equivalent prompts using Refiant’s approach.

The current breakthrough results were attained at the end of last year, and since then the team has been working to exceed them with further compression, longer context windows and model traceability.

“The AI industry is spending hundreds of billions scaling infrastructure when the real breakthrough is the ability to do more with radically less,” said Mr Viroshan Naicker, co-founder and a mathematician with published research in networks and quantum systems. “Nature doesn’t build by brute force. Evolution optimises. We’ve applied that principle to AI – and the results speak for themselves.”

“AI’s biggest constraint isn’t demand – it’s energy,” added Mr Joseph Goodman, Managing Partner, VoLo Earth. “What’s been missing is a fundamentally more efficient way to compute. Refiant’s architecture replaces brute-force scaling with a far more efficient, nature-inspired approach that lowers energy use while increasing capability. That’s the kind of breakthrough needed to make AI sustainable on a global scale.”


Technology

Google, UpSkill Universe Revamp Hustle Academy to Bring Free AI Skills to Africans


By Adedapo Adesanya

Google and UpSkill Universe, Sub-Saharan Africa’s leading AI and business skills training partner, have announced a major redesign of the Google Hustle Academy programme. For the first time, the free training initiative is open to everyone, not just business owners.

The new curriculum is focused on equipping individuals and entrepreneurs with practical AI skills and comes at a time when small businesses have become the engine of Africa’s economy, creating over 80 per cent of jobs on the continent. To help them grow, the Hustle Academy was launched in 2022, providing bootcamp-style training on business strategy, digital skills, AI, and leadership. The programme has since trained over 18,000 SMEs, with many reporting increased revenue and job creation.

Now, as AI reshapes the job market, the programme is evolving. The 2026 edition is built for anyone in Sub-Saharan Africa, including employees, students and job seekers, who wants to use AI to advance their career. To meet the needs of a diverse audience, the new format includes short, 60-minute webinars and more immersive, high-impact bootcamps. These sessions are laser-focused on putting AI to work immediately in areas like digital commerce, marketing, and growth strategy.

Speaking about the academy, Mr Gori Yahaya, Founder & CEO of UpSkill Universe, said, “The 2026 Hustle Academy is designed to close the AI skills gap with hands-on training that is short, focused, and immediately useful. AI is reshaping how businesses win and how careers are built, right across this continent. We’re excited to renew our partnership, now in its fifth year with Google, combining their global AI leadership with our deep regional AI expertise. The next wave of AI leaders will come from this continent. We are making sure they are ready.”

The Hustle Academy initiative has strengthened digital competitiveness across emerging African economies by enabling SMEs to move beyond AI awareness to practical implementation, positioning them for sustained growth in an increasingly AI-driven business environment.

“We believe that the future of Africa’s digital economy lies in the hands of individuals and entrepreneurs alike. Our new strategy focuses on scaling reach by training individuals in the latest AI-centred tools and techniques,” said a Google representative.

Applications for the 2026 cohort are now open. Interested participants can apply at: https://rsvp.withgoogle.com/events/hustle-academy

