Technology

Google Experts Answer Your Top Most Searched Questions on AI

New search trends released by Google show that search interest in AI has reached an all-time high in Nigeria. The trends show that people have searched for AI more than ever in 2023 so far, with interest rising 310% since last year, and by 1,660% in the last five years.

Google’s research also revealed the top trending questions being asked about AI across Nigeria. Here, Google West Africa Director, Olumide Balogun, answers some of the most frequently asked questions.

  1. What is Artificial Intelligence and how does it work?

AI is a type of technology that can learn from its environment, experiences and people, and that can understand patterns and make projections better than any previous technology.

AI models are trained and created by human engineers, who input data into the AI system to train it. For example, in 2012, we showed an AI model thousands of videos of cats on YouTube so that it could learn to recognize a cat. Now, with advancements in technology, we could give an AI model hundreds of books on animals to read – and, using those, it would be able to describe a cat to us on its own despite having never been shown one.

Once AI systems are trained, they’re tested to see if they work well. You can do this by asking the AI model to describe or recognise a cat, for example, or even to generate a picture of one for you. Training AI models can take a long time – but once they work, they can be deployed into production so that you can use them at home.
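
The train-then-test loop described above can be sketched in miniature. The toy classifier below is not how production models work – real systems use neural networks trained on vast datasets – it is just a self-contained illustration of the idea: fit a model on labelled examples, then check its prediction on an example it has never seen.

```python
# Toy illustration of the train-then-test loop. The "images" here are
# stand-in feature vectors; the workflow mirrors the article's description:
# fit on labelled examples, then test on an unseen one.

def train(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model, features):
    """Pick the label whose centroid is closest to the input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# "Training": labelled examples of (ear_pointiness, size) -> label.
training_set = [([0.9, 0.3], "cat"), ([0.8, 0.2], "cat"),
                ([0.2, 0.8], "dog"), ([0.3, 0.9], "dog")]
model = train(training_set)

# "Testing": an unseen example the model was never shown.
print(predict(model, [0.85, 0.25]))  # prints "cat"
```

The same pattern scales up: deployment only happens once the model answers test cases like this one reliably.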

  2. When did AI start?

AI can be traced back to the early 1950s, when Alan Turing – a British mathematician – published a paper called Computing Machinery and Intelligence. That kick-started the principles behind AI – but the first time anyone used the term was likely in 1956, when John McCarthy hosted a conference at Dartmouth College called the Dartmouth Summer Research Project on Artificial Intelligence.

So AI is not new – in fact, AI research has been accelerating since the 1990s. Google itself became an AI-first company back in 2015. But the pace of AI development is accelerating – with more households able to access generative AI tools like text-to-image generators or chatbots – which has made AI a household phrase for perhaps the first time.

  3. Where is AI used?

AI has always been integral to many daily tools, from Google Translate to antilock braking in cars. Its transformative power, however, is being harnessed more profoundly now. In the heart of this evolution is the Google AI Centre in Accra, laser-focused on Africa’s unique challenges and aspirations. While innovations like Google DeepMind’s AlphaFold impact global biotech, in Africa, we’re taking strides that resonate with local needs. We’re collaborating to map remote buildings for better planning, using AI to predict challenges like locust outbreaks and enhancing maternal health via AI-powered ultrasound.

AI’s potential in sustainability is vast. In Africa, it’s about thriving industries that respect our rich biodiversity. While the global health community benefits from protein sequence mapping, for Africa, it’s a hope against diseases like malaria.

  4. What can AI do and how can I use it?

Think of AI as a tool that’s really good at understanding patterns and making projections – better than any computer has been before – and that’s been taught to learn from its environment, experiences and people. When you put that ability to good use, you can use AI to do all sorts of amazing things, like helping doctors to screen for and identify cancer, predicting and monitoring natural disasters, or helping businesses to identify and reduce their carbon emissions.

You’re probably using AI all the time already without realising it. But you can now also use AI to help boost your productivity with experimental language tools like Bard, to translate even more languages on Google Translate, or to find the most fuel-efficient route on Google Maps.

  5. Is AI dangerous?

AI is like any other technology in that it can be used for good or bad, depending on the user. On the one hand, it has incredible potential to be used in ways that are beneficial for society – whether it’s protecting people from spam and fraud, translating hundreds more languages, or forecasting floods up to seven days in advance. But it can also be used to amplify current societal issues – like misinformation and discrimination.

It’s really important that we get these tools right, working together to ensure we’re creating and using them responsibly. That means governments are introducing regulations to help us seize the benefits of AI while mitigating the risks, as well as companies developing shared sets of standards and principles. At Google, we’re also led by our own AI Principles – which you can read online – to make sure we’re developing AI that is beneficial for society. 

  6. Will AI take my job?

As technology has developed, so too has the job market. At the beginning of the last century, people mostly worked in agriculture. Now we have hedge fund managers, cabin crews aboard widely accessible commercial flights – and, as recently as 1995, web designers. So we’ve had these questions for a long time, and, as a society, we’ve navigated them well.

That’s not to underestimate the potential of AI – which is essentially the ‘third wave’ of digital technology after the internet and mobile phones. It will be brilliant for people’s productivity and for economic opportunity – but it will also cause some levels of disruption. We’ll see a whole set of jobs that can grow – but the most profound change will be how many of our jobs will be assisted by technology. AI will become a partner to many of us, helping us not just to make the repetitive tasks of our work more efficient but also sparking creativity and enabling us to spend more time on the bits of our jobs that we love and that challenge us. We’re already working with people to help them learn how AI can help them. Our Grow with Google programs have trained 7 million people and helped to close the digital skills gap in Africa. Governments, NGOs, and the private sector can work together to bring similar schemes about – ensuring that everyone can benefit from AI.

Technology

ipNX, NCC to Drive Inclusive Digital Growth Across Nigeria

By Aduragbemi Omiyale

A leading Information and Communications Technology (ICT) company, ipNX Nigeria, is joining forces with the Nigerian Communications Commission (NCC) to accelerate broadband penetration and drive inclusive digital growth across the country.

Recently, an executive delegation of the organisation paid a visit to the chairman of the regulatory agency, Mr Idris Olorunnimbe.

“We are pleased to engage with the new chairman of the NCC and show our support as he takes on this important role.

“Strong leadership and a clear policy direction are essential to unlocking the full potential of Nigeria’s digital economy.

“At ipNX, we remain committed to working closely with the commission and other stakeholders to expand broadband access, enhance connectivity in educational institutions, and ultimately bridge the digital divide.

“This collaboration will empower millions of Nigerians and further position the country as a leader in Africa’s technological evolution,” the Managing Director of ipNX Nigeria, Mr Ejovi Aror, said at the visit.

In his remarks, Mr Olorunnimbe thanked the firm for the show of support, reiterating the commission’s commitment to fostering an enabling environment for private sector participation in achieving universal broadband access across Nigeria.

This collaboration is expected to advance Nigeria’s transformation agenda in technology and help boost the federal government’s broadband agenda for the country.

ipNX Nigeria has said it remains at the forefront of delivering cutting-edge broadband and ICT solutions, and this engagement underscores its unwavering dedication to supporting national development through technology-driven initiatives.

Technology

MTN Nigeria to Offload 60% Stake in MoMo PSB, YDFS for N95.5bn

By Adedapo Adesanya

MTN Nigeria is restructuring its fintech business by bringing in its parent company, MTN Group, as a major investor to help cushion against losses that have plagued the units.

Yesterday, MTN Nigeria announced that its parent firm, based in South Africa, will acquire a 60 per cent stake in MoMo Payment Service Bank Limited (MoMo PSB) and Y’ello Digital Financial Services (YDFS) Limited.

MoMo is a payment service bank business that provides financial services, including deposits, payments, transfers and digital wallets to individuals and small businesses in Nigeria via digital and mobile‑based platforms.

Y’ello Digital is a licensed super-agent that provides agency banking and financial services, including cash deposits, withdrawals and bill payments. It operates through the MoMo network.

In an explanatory note on the proposed transaction on Tuesday, MTN Nigeria said the transaction is valued at N95.5 billion and will reduce its exposure to the “loss-making” financial technology (fintech) companies.

According to the Nigerian subsidiary, the acquisition, which the South African company will conduct through another subsidiary, MTN Group Fintech, is a restructuring that consists of two phases.

MTN Nigeria said the first phase is the acquisition of a 60 per cent stake in each of the two fintech companies by MTN Group.

“MTN Group Fintech will acquire a 60 per cent stake in each of the Fintech Companies through a combination of primary issuance of shares by the Fintech Companies and a secondary acquisition of shares in MoMo PSB from MTN Nigeria, at an agreed valuation of N95.5 billion (on an intra-group debt free and cash free basis), resulting in an implied capital injection of N152.06 billion payable in cash or consideration other than cash, or a combination (the “Investment Amount”) into the Fintech Companies; and MTN Nigeria will retain a 40% stake in the Fintech Companies,” the statement read.

According to the explanatory note, the second phase is the creation of a financial holding company named Fintech HoldCo, which will be 60 per cent owned by MTN Group Fintech and 40 per cent owned by MTN Nigeria.

The fintech units are currently loss-making, and this move will help MTN Nigeria reduce financial risk and share the future losses and investment burden. However, it will still keep a significant minority stake of 40 per cent.

The network provider said the second phase will be completed with Fintech HoldCo acquiring the shares held by MTN Group Fintech and MTN Nigeria in MoMo and Y’ello Digital.

“Subject to obtaining the approval of the CBN, Fintech HoldCo will become the 100% owner of the shares in the Fintech Companies, having acquired all the shares held respectively by MTN Group Fintech and MTN Nigeria in the Fintech Companies,” the telecommunications company said.

MTN Nigeria said an annual general meeting (AGM) will be held on April 30 for shareholders to consider and, if thought fit, approve the proposed transaction.

The telco said the proposed transaction distributes operational risks, allowing MTN Group Fintech to share future capital risks, such as losses, regulatory burdens and execution risks.

In August 2024, MTN Nigeria acquired a 7.17 per cent stake held by Acxani Capital Limited in MoMo.

The acquisition increased MTN Nigeria’s total stake in MoMo to 100 per cent.

Technology

Why Simplicity Now Beats Bigger Motion Suites

Most people do not go looking for motion tools because they love software. They go looking because they already have an image that feels unfinished. It might be a portrait that needs movement, a product shot that needs more energy, or a still frame that needs to become a short social clip. That is why Image to Video AI stood out to me more than many broader video platforms. In this category, the real question is not whether AI can animate an image. The real question is whether it can do so in a way that feels understandable, practical, and repeatable.

A lot of rankings in this space reward spectacle. They favor the system that produces the wildest sample or the most cinematic first impression. That can be fun, but it is not always helpful. In my testing, usefulness came from something less glamorous: how quickly a platform helped me move from a single still image to a result I could actually imagine publishing, refining, or repurposing. When I looked at seven well-known image-to-video platforms through that lens, Image2Video came out first, not because it tries to do everything, but because it keeps the path from idea to output unusually clear.

How I Judged Seven Image Motion Platforms

When I compare tools in this category, I try to judge them like working products rather than as isolated demos. A strong demo says very little about how a tool feels when you bring your own image, your own expectations, and your own creative uncertainty. What matters more is the relationship between control and friction.

Criteria That Matter Beyond Eye-Catching Demos

My ranking focused on five practical questions. First, how easy is it to understand the workflow without guessing? Second, how much prompt effort is required before the tool starts producing usable motion? Third, does the platform feel tuned for people starting from a still image rather than for users building full video pipelines? Fourth, are the results good enough for short-form content, concept work, and presentation use? Fifth, does the system make me want to try again after an imperfect first result?

Workflow clarity shaped most of my ranking

That last point matters more than it sounds. Many AI tools can produce one exciting output. Fewer make the user feel oriented. If the interface or product logic is too expansive, the experience can become mentally heavy. In image-to-video creation, that heaviness often kills momentum. The best platform is frequently the one that removes hesitation and helps the user move while their idea is still fresh.

Seven Platforms That Deserve Serious Attention

There are more than seven tools in this market, but these are the seven that most clearly represent different approaches to image-to-video generation today. My ranking below is not a universal truth. It reflects the priorities above: clarity, accessibility, practical output, and how well each tool serves someone starting with a static image.

| Rank | Platform | Best Fit | Main Strength | Main Tradeoff |
|------|----------|----------|---------------|---------------|
| 1 | Image2Video | Fast image-to-video creation | Clear workflow and low friction | Short outputs require precise prompting |
| 2 | Runway | Broader creative teams | Strong ecosystem and creative range | Can feel larger than necessary for simple tasks |
| 3 | Kling | Motion quality seekers | Often impressive movement and visual polish | Can require more patience and experimentation |
| 4 | Pika | Social-first creators | Fast, playful, accessible generation | Less focused on disciplined image-first workflows |
| 5 | PixVerse | Quick visual experimentation | Easy short-form energy and stylized results | Output direction can feel less predictable |
| 6 | Luma Dream Machine | Visual concept development | Strong mood and cinematic ambition | Not always the simplest path for basic use cases |
| 7 | Hailuo AI | Curious testers and creatives | Interesting generative behavior and variety | Results can vary more from prompt to prompt |

The list becomes more useful when you stop asking which platform is the most powerful and start asking which one best matches your immediate job. A big creative suite is not automatically better than a focused workflow. Sometimes it is the opposite.

Why Image2Video Comes First In Daily Use

Image2Video ranks first for me because its public structure aligns with what many users actually need. A lot of people arriving at an image-to-video tool are not trying to build a long-form production pipeline. They are trying to animate one image well enough to test an idea, communicate a concept, or publish a short clip. The platform appears to understand that mindset.

A focused product usually wastes less energy

In practice, a focused product often beats a feature-dense one because it reduces decision fatigue. Instead of pushing the user into a larger ecosystem before they know what they want, Image2Video emphasizes a straightforward sequence. That matters. It keeps attention on the source image, the intended motion, and the resulting clip rather than on the surrounding machinery.

The official path stays short and understandable

Based on the public workflow on the site, the process is simple:

  1. Upload an image in a standard format such as JPEG (JPG) or PNG.
  2. Enter a prompt describing the movement, animation, or camera behavior you want.
  3. Let the system process the request.
  4. Export the resulting video in MP4 format.
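
Taken together, the four steps amount to a small pipeline. The sketch below is purely illustrative – Image2Video does not publicly document an API, so the function names and the injected `generate` callable are hypothetical stand-ins for whatever service actually performs the processing:

```python
from pathlib import Path

# Formats the workflow above lists as accepted uploads.
ALLOWED_INPUTS = {".jpeg", ".jpg", ".png"}

def animate_image(image_path, prompt, generate):
    """Hypothetical sketch of the four-step flow: validate the upload,
    submit the motion prompt, let `generate` stand in for the processing
    step, and name the MP4 export. Inject any callable that takes
    (path, prompt) and returns video bytes."""
    path = Path(image_path)
    if path.suffix.lower() not in ALLOWED_INPUTS:   # step 1: upload check
        raise ValueError(f"unsupported image format: {path.suffix}")
    video_bytes = generate(path, prompt)            # steps 2-3: prompt + processing
    out_name = path.with_suffix(".mp4").name        # step 4: MP4 export
    return out_name, video_bytes

# Usage with a stand-in generator (a real one would call the service):
fake_generate = lambda path, prompt: b"fake-mp4-bytes"
name, data = animate_image("portrait.png", "subtle zoom, gentle head turn",
                           fake_generate)
print(name)  # prints "portrait.mp4"
```

The injected callable is the part a real integration would replace; everything around it is just input validation and file naming.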

That sequence may sound almost too simple, but simplicity is part of the value. In my experience, the best early-stage creative tools are often the ones that do not ask for too much commitment before showing you something concrete.

How The Four Step Process Actually Feels

The official flow does more than save time. It shapes the psychology of use. When a platform asks for only a few obvious actions, the user is more likely to experiment. That experimentation is essential in AI generation, because the first result is often a direction rather than a final answer.

Uploading and prompting are the real turning point

The upload step is not merely technical. It defines the quality ceiling of the whole attempt. A clear source image gives the model a stronger foundation. Then the prompt becomes the bridge between stillness and motion. In my tests, the best prompts were not long essays. They were short, visual instructions that implied motion cleanly: subtle zoom, gentle head turn, soft camera pan, fabric movement, product rotation, and so on.

Processing time matters less than output direction

The site indicates that processing may take a few minutes, and that feels reasonable for this category. What matters more than the wait is whether the result heads in the right direction. A fast wrong answer is not especially useful. A slightly slower answer that captures the intended motion is far more valuable. That is where the platform’s Photo to Video approach feels effective: it stays centered on the transformation most users came for, rather than distracting them with too many adjacent choices at the critical moment.

Where The Platform Still Requires Patience

No honest review of an AI generator should pretend the system will perfectly interpret every prompt on the first try. Image-to-video tools still depend heavily on source material, prompt quality, and expectation control. Image2Video is no exception.

Short clips reward better prompt discipline

The platform’s short-form orientation is both a strength and a limitation. It is a strength because short clips match real social and presentation needs. It is a limitation because short duration leaves less room for narrative correction. If the movement direction is off, the whole clip can feel wrong quickly. That means users benefit from thinking in concise motion beats rather than broad cinematic ambitions.

Regeneration remains part of the creative routine

This is not a weakness unique to one platform. It is a category reality. In many cases, the first generation is a draft. The second or third attempt is where intent starts to align with output. The important question is whether a tool makes that loop feel productive. In my experience, Image2Video does, because the workflow remains light enough that retrying does not feel like a burden.

Who Should Choose Which Tool First

The best platform always depends on the type of work you are actually doing. Ranking is useful only if it helps real people choose more efficiently. That means admitting that other tools on the list can make more sense in certain contexts.

Different creators need different types of control

If you need a larger creative environment with broader editing ambitions, Runway may be a more natural fit. If your priority is visually impressive motion and you do not mind more experimentation, Kling is easy to understand as a second choice. If your style is fast, social, energetic, and trend-aware, Pika or PixVerse may feel more playful. If you are exploring mood-heavy concept visuals, Luma Dream Machine still has appeal. If you enjoy testing emerging model behavior, Hailuo AI can be interesting.

The best choice depends on your starting asset

Still, if your starting point is simple and concrete – one image and one desired motion – Image2Video remains the most convincing first stop in this group. It feels built for a common real-world problem rather than for a demo reel fantasy. That distinction matters. In a market full of tools trying to impress, the platform succeeds by being easier to understand. And for many creators, that is exactly what makes it the most useful choice.
