Researchers discover how to do SEO for AI searches

Mondo Technology Updated on 2024-01-30

The researchers tested multiple ways to optimize content for AI search and discovered exactly how to improve visibility. They succeeded in increasing the visibility of lower-ranked small websites by 115%, allowing them to outperform the large enterprises that typically dominate the top of search results.

Researchers from Princeton University, Georgia Institute of Technology, Allen Institute for Artificial Intelligence, and Indian Institute of Technology Delhi have observed that their generative engine optimization technology, known as GEO, is able to improve visibility by up to 40% overall.

Nine optimization techniques were tested in multiple areas of knowledge (e.g., law, history, science, etc.), and they found what worked, what didn't, and what methods actually made the rankings worse.

Of particular interest is the fact that some of these techniques are especially effective for specific areas of knowledge, while three of them work well across all types of queries.

The researchers highlighted GEO's ability to democratize the top of search results, writing:

"This finding highlights the potential of GEO as a tool for democratizing the digital space.

Importantly, many of the lower-ranked websites are often created by small content creators or independent businesses that have traditionally struggled to compete with the larger companies that hold the highest rankings in search engine results."

Tests were conducted on the AI search engine Perplexity.ai and on a generative search engine the researchers built, modeled after Bing Chat; the results from Perplexity were similar to those from the Bing Chat-style engine.

They observed in Section 6 of the study:

"We found that, similar to our generative engine, Quotation Addition performed best on position-adjusted word count, with a 22% improvement over the baseline. In addition, methods that performed well on our generative engine, such as Cite Sources and Statistics Addition, show high improvements of up to 9% and 37% on both metrics."
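The "position-adjusted word count" metric mentioned here rewards sources that are cited early and at length in the engine's answer. The exact formula is in the paper; the sketch below is only an illustration of the idea, with an assumed exponential position decay:

```python
import math

def position_adjusted_word_count(sentences, source_of):
    """Estimate per-source visibility in a generated answer.

    sentences: the sentences of the engine's response, in order.
    source_of: dict mapping sentence index -> cited source id.
    Earlier sentences get a larger weight (an assumed exponential
    decay; the paper's exact weighting may differ), and longer
    attributed sentences contribute more words.
    """
    n = len(sentences)
    scores = {}
    for i, sentence in enumerate(sentences):
        src = source_of.get(i)
        if src is None:
            continue  # sentence cites no source
        weight = math.exp(-i / n)  # assumption: earlier position counts more
        scores[src] = scores.get(src, 0.0) + weight * len(sentence.split())
    total = sum(scores.values()) or 1.0
    # Normalize so each source gets a share of total visibility
    return {src: s / total for src, s in scores.items()}
```

Under this formulation, a source cited in the first, longest sentence of the response receives a larger visibility share than one cited briefly near the end.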

The researchers tested their approach both on a generative search engine they created, modeled on the Bing Chat workflow, and on the AI search engine Perplexity.ai.

They wrote:

"We describe a generative engine that includes several back-end generative models and a search engine for source retrieval.

The generative engine (GE) takes the user query q_u as input and returns a natural language response r, where p_u represents personalized user information such as preferences and history.

The generative engine consists of two key components:

a. A set of generative models G = {G_1, …, G_n}, each with a specific purpose, such as query reformulation or summarization, and

b. A search engine SE that, for a given query q, returns a set of sources S = {s_1, …, s_m}.

We propose a representative workflow. At the time of writing, it is very similar to the design of Bing Chat. The workflow breaks the input query down into a set of simpler queries that are better suited to search engines."
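A minimal sketch of that workflow, with the decomposition, retrieval, and summarization steps passed in as stand-in functions (the real system would call back-end LLMs and a search index), might look like:

```python
def generative_engine(user_query, decompose, search, summarize):
    """Sketch of the GE pipeline described above (the helper functions
    are hypothetical stand-ins, not the researchers' implementation)."""
    sub_queries = decompose(user_query)    # break query into simpler searches
    sources = []
    for q in sub_queries:                  # retrieve sources per sub-query
        sources.extend(search(q))
    return summarize(user_query, sources)  # compose a cited response

# Toy stand-ins to show the flow end to end:
demo = generative_engine(
    "what is GEO?",
    decompose=lambda q: [q.lower()],
    search=lambda q: [f"source for '{q}'"],
    summarize=lambda q, srcs: f"Answer to '{q}' citing {len(srcs)} source(s).",
)
```

The key design point is the middle step: because retrieval operates on simplified sub-queries rather than the raw user query, the sources a site can appear in are determined before any response text is generated.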

The researchers created a benchmark of 10,000 search queries drawn from nine different sources, spanning multiple knowledge domains and varying levels of complexity. For example, some queries require reasoning to arrive at an answer.

The research paper explains:

"…We curated GEO-bench, a benchmark of 10K queries from multiple sources, repurposed for generative engines, along with synthetically generated queries. The benchmark includes queries from nine different sources, each further categorized based on its target domain, difficulty, query intent, and other dimensions."

Here's a list of nine search query sources:

1. MS MARCO

2. ORCAS-1

3. Natural Questions

4. AllSouls: This dataset contains essay questions from All Souls College, University of Oxford.

5. LIMA: Contains challenging questions that require the generative engine to not only aggregate information but also reason appropriately to answer the question.

6. Davinci-Debate: Contains debate questions.

7. Perplexity.ai Discover: These queries come from Perplexity.ai's Discover section, a continuously updated list of trending queries.

8. ELI5: This dataset contains questions from the ELI5 subreddit.

9. GPT-4-generated queries: To complement the diversity of the query distribution, the researchers prompted GPT-4 to generate queries across different domains (e.g., science, history), query intents (e.g., navigational, transactional), difficulty levels, and response types (e.g., open-ended, fact-based).
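The categorization scheme the paper describes (source, domain, difficulty, intent) maps naturally onto a simple record type. The field values below are illustrative, not drawn from the benchmark itself:

```python
from dataclasses import dataclass

@dataclass
class BenchQuery:
    """One benchmark query with the tagging dimensions described above."""
    text: str
    source: str      # e.g. "MS MARCO", "ELI5", "GPT-4-generated"
    domain: str      # e.g. "science", "history", "law"
    difficulty: str  # e.g. "easy", "requires-reasoning"
    intent: str      # e.g. "navigational", "transactional", "open-ended"

# Hypothetical example entry:
q = BenchQuery(
    text="Why is the sky blue?",
    source="ELI5",
    domain="science",
    difficulty="easy",
    intent="open-ended",
)
```

Tagging each query this way is what lets the researchers later break results down per domain and per intent, rather than reporting only one aggregate number.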

The researchers tested nine different optimization methods, tracking how they work for different types of searches, such as law and science, people and society, health, history, and other topics.

They found that each niche theme responds well to different optimization strategies.

The nine strategies tested were:

Authoritative: Change the writing style to make claims more persuasive and authoritative.

Keyword optimization: Add more keywords from the search query.

Statistics addition: Revise the content to include quantitative statistics instead of qualitative discussion.

Cite sources: Add citations from reliable sources.

Quotation addition: Add quotations from high-quality sources.

Understandable: Make the content easier to understand.

Fluency optimization: Make the content more fluent and clear.

Unique words: Add words that are less widely used, rare, and unique, but don't change the meaning of the content.

Technical terms: This strategy adds unique technical terms where it makes sense and doesn't change the meaning of the content.
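Each of these strategies is, in effect, a content-rewrite rule, and in practice one could apply them by prompting an LLM. The prompt wording below is my own illustration, not the prompts used in the study:

```python
# Illustrative rewrite prompts for some of the GEO strategies above;
# the actual prompts used in the research are not reproduced here.
GEO_PROMPTS = {
    "cite_sources":       "Add citations to reliable sources, changing the text minimally:\n{content}",
    "quotation_addition": "Add relevant quotations from high-quality sources:\n{content}",
    "statistics_addition": "Replace qualitative claims with concrete statistics where possible:\n{content}",
    "fluency":            "Rewrite for clearer, more fluent prose without changing the meaning:\n{content}",
}

def build_rewrite_prompt(strategy, content):
    """Return an LLM prompt applying one optimization strategy to content."""
    template = GEO_PROMPTS.get(strategy)
    if template is None:
        raise ValueError(f"unknown strategy: {strategy}")
    return template.format(content=content)
```

For example, `build_rewrite_prompt("cite_sources", page_text)` would produce the rewrite instruction for the cite-sources strategy; the rewritten page could then be re-scored against the baseline.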

Which methods work best?

The top three optimization strategies were:

Cite sources

Quotation addition

Statistics addition

These three strategies achieved a relative improvement of 30-40% compared to the baseline.
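The reported figures are relative improvements over a baseline visibility score, i.e. simple percent change:

```python
def relative_improvement(optimized, baseline):
    """Percent change in a visibility score relative to the baseline."""
    return (optimized - baseline) / baseline * 100

# e.g. a visibility score rising from 0.20 to 0.27 is a 35% relative improvement
```

The numbers used here are illustrative; the study reports the relative gains, not the underlying raw scores.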

The researchers describe the success of these strategies:

"These methods involve adding relevant statistics to the content, including quotations, and citing reliable sources, all with minimal changes to the content itself.

However, they significantly increase the visibility of the content in generative engine responses because they enhance the credibility and richness of the content."

Fluency optimization and the easy-to-understand approach also helped, improving visibility by 15-30%.

The researchers interpreted the results as showing how AI search engines evaluate both content and the way it is presented.

The researchers were surprised to find that, unlike several other methods, using a persuasive and authoritative tone in content typically did not improve rankings in AI search engines.

Similarly, adding more keywords from the search query didn't work either. In fact, keyword optimization performed about 10% worse than the baseline.

An interesting finding of the report is that which optimization works best depends on the field of knowledge (law, science, history, etc.).

They found that content related to historical domains ranked better when "authoritative" optimization (using more persuasive language) was applied.

Cite-sources optimization, which adds authoritative citations to content, worked best for factual search queries.

Adding statistics was highly effective for legal questions and related topics. Statistics were also effective for "opinion" questions, where searchers ask the AI what it thinks about something.

The researchers observed:

"This suggests that incorporating data-driven evidence can improve visibility in specific contexts."

Adding citations worked well for the people-and-society, explanation, and history domains. The researchers interpret these results as suggesting that AI search engines may prefer the "authenticity" and "depth" such additions bring to those questions.

The researchers concluded that domain-specific optimization is the best approach.

The good news from this study is that websites that typically rank lower stand to benefit most from these AI-search optimization strategies.

They concluded:

"Interestingly, websites that rank lower in the SERPs, which often struggle to gain visibility, benefit significantly more from GEO than higher-ranked websites.

For example, the cite-sources method significantly increased the visibility of the fifth-ranked website in the SERPs by 115.1%, while the top-ranked website saw an average decrease of 30.3%.

…Applying GEO methods gives these smaller content creators an opportunity to significantly improve their visibility in generative engine responses.

By using GEO to enhance their content, they can reach a wider audience, creating a level playing field that allows them to compete more effectively with larger companies in the digital space."

This research charts a new SEO path for AI-based search engines. Those who claim that AI search will kill SEO are speaking too soon. This study suggests that SEO will eventually evolve into GEO in order to compete in the next generation of AI search engines.
