Google Gemini

Google seems to be the reluctant late entrant to the generative AI party. Although it toyed with the idea with Bard (still billed as an AI experiment), it’s with the much-hyped Gemini that Google is jumping in with both feet. Jeff Dean, Chief Scientist at Google DeepMind, the team responsible for Gemini’s development, announced the formal launch in a tweet on December 6. He went on to say, “It’s one of the largest science and engineering efforts we’ve ever undertaken.”

The company emphasizes Gemini’s strengths in science, math, and programming, as well as the platform’s multimodal capabilities – the ability of a generative AI model to process text, images, audio, video, code, and so on.

The real buzz is around Gemini outperforming human experts on the widely studied MMLU (Massive Multitask Language Understanding) benchmark, which spans a staggering 57 subjects, by scoring over 90%. It’s the first generative AI model to achieve this feat!

Among other benchmarks, Gemini is claimed to have come out on top in 30 out of 32: 10 of 12 text and reasoning benchmarks, 9 of 9 image understanding benchmarks, 6 of 6 video understanding benchmarks, and 5 of 5 speech recognition and speech translation benchmarks. That is a tremendous performance indeed, at least on paper.

We also know that it comes in three distinct variants: Ultra, Pro, and Nano. Ultra is the all-inclusive, top-of-the-line version built for complex processing, while Nano is for users constrained by on-device computational limits, i.e., everyday users most likely to run Gemini on mobile devices. We suspect the Pro version will be pegged somewhere in between.

According to CNET’s report, Gemini was given a physics homework question consisting of a diagram and complex equations. It was able to analyze and interpret the diagram, extract the necessary information, and apply the equations in the question to arrive at the right answer. Pretty impressive. We can’t wait to see how it fares against GPT-4 and other generative AI models, though the Ultra version isn’t expected to go live anytime soon.

Business Hours as a Local Search Ranking Factor

Recently there has been a lot of noise about how business hours affect local rankings. Curious about whether and how this seemingly minor detail on your business profile can affect rankings, we got busy trying various combinations on different devices and platforms (Google Maps and Search) to determine the actual impact.

So how do business hours affect local rankings? Let’s say someone searches for “Dentist in the Bay Area” at 6:00 a.m. on a Monday, and the practice only opens at 9:00 a.m. Previously, the time of the search didn’t matter: the business would rank in the same position but simply show as ‘Closed’. With the new update, it would rank lower at 6 a.m. and return to its original position after it opens. The same applies to the day of the week.
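If you’re curious what that behavior might look like under the hood, here’s a toy Python sketch of an “open now” adjustment at query time. To be clear, the hours table, the scoring function, and the 0.8 demotion factor are made-up placeholders for illustration only, not Google’s actual algorithm, and the real size of the effect (if any) is unknown.

```python
from datetime import datetime, time

# Hypothetical opening hours for a dental practice (illustrative only).
BUSINESS_HOURS = {
    "Monday": (time(9, 0), time(17, 0)),
    "Tuesday": (time(9, 0), time(17, 0)),
    # remaining days omitted for brevity
}

def is_open(hours: dict, when: datetime) -> bool:
    """Return True if the business is open at the time of the search."""
    day = when.strftime("%A")
    if day not in hours:
        return False
    open_at, close_at = hours[day]
    return open_at <= when.time() < close_at

def adjusted_rank_score(base_score: float, hours: dict, when: datetime) -> float:
    """Demote the listing when it is closed at query time.

    The 0.8 multiplier is an arbitrary placeholder, not a real figure.
    """
    return base_score if is_open(hours, when) else base_score * 0.8

# A 6:00 a.m. Monday search scores lower than a 10:00 a.m. one.
monday_6am = datetime(2023, 12, 4, 6, 0)
monday_10am = datetime(2023, 12, 4, 10, 0)
print(adjusted_rank_score(1.0, BUSINESS_HOURS, monday_6am))   # 0.8
print(adjusted_rank_score(1.0, BUSINESS_HOURS, monday_10am))  # 1.0
```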

Several local SEO pros also got their hands dirty, and after some trial-and-error testing, they all reached the same conclusion. Darren Shaw from Whitespark initially tweeted that he found no changes in rankings, but retracted over the weekend, saying that ranking changes were observed after business hours were changed across business categories.

Although Google’s Danny Sullivan didn’t categorically confirm that the search engine made any changes to the local ranking algorithm, he did say that it isn’t part of a core update.

For now, all we can say is that it’s best to make sure your actual business hours are up to date on your Google Business Profile and on major directories so that customers can find you.

Generative AI in Local SEO – One Year of ChatGPT

It’s been a year since OpenAI launched the ground-breaking ChatGPT, and the world has not been the same since. As local SEO professionals who cater to the needs of both agencies and local businesses, we are often asked by our clients how generative AI has changed the way we work. They are particularly interested in the content creation bit.

So we thought we would share some of our observations.

First things first: no, we haven’t replaced our writers with ChatGPT! And we don’t believe we will be doing that, at least not in the foreseeable future. When it all started, we thought AI might replace our writers. A year on, we are convinced that’s far from the truth.

Yes, GenAI does a lot of things well, but we are not going to ask it to “Write a 1,000-word article on How to Create Awesome GBP Posts.”

Instead, we use AI for content ideation. Our questions are more like “List out a few ways to create a post about an offer for Thanksgiving.” ChatGPT is an amazing and efficient brainstorming tool. Our writers then take those ideas forward, adding context, real-life examples, and so on to create content that actually resonates with the audience.
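If your team prefers to script this ideation step instead of using the ChatGPT interface, here’s a minimal sketch using OpenAI’s Python SDK. The model name, system role, and prompt wording are our own assumptions; the same brainstorming question works just as well pasted straight into ChatGPT.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The ideation prompt mirrors the brainstorming question above;
# the model name is an assumption -- use whichever your plan provides.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a local SEO content strategist."},
        {"role": "user", "content": "List out a few ways to create a Google "
                                    "Business Profile post about a Thanksgiving offer."},
    ],
)

# The raw ideas then go to a human writer, who adds context and real examples.
print(response.choices[0].message.content)
```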

Sure, we do use AI tools such as DALL-E, Adobe Firefly, and Midjourney for image creation. However, we take extra precautions to ensure the final content doesn’t infringe on any copyrights.

As for creatives, we found that each of the tools mentioned above has its own strengths. Adobe Firefly, for example, draws on Adobe’s stock library, so there are no copyright issues when we need real-life images, say a team of professionals in an office. Midjourney is really good at detailed images. And DALL-E interprets generic prompts much better, so it’s a lot quicker for more generic ones.

Apart from content creation, we deploy generative AI for a host of operations-related tasks, such as creating questionnaires to gather client requirements, analyzing vast quantities of textual content, and the like. However, as with adopting any new technology or process, every new use case is put through rigorous testing until we are satisfied that AI adds value.
