
How to build a semantic core from scratch: an example, plus services for compiling one

The semantic core is the basis of website promotion on the Web. Without it, a site cannot be brought to the top of search results and kept there for long. We will explain what the core is made of, where to look for keywords, and which tools to use.

What is the semantic core

To simplify understanding, let's say that the semantic core (SC) is the set of all words, phrases, and their variations that fully describe the content of your site. The more accurately and carefully the core is assembled, the easier it is to promote the site.

Roughly speaking, it is one long list of words and phrases (keywords) by which users search for similar goods and services. There are no general guidelines for core size, but there is one rule: the bigger and cleaner, the better. The main thing is not to inflate the size artificially just to make the core larger. If you chase size at the expense of quality, all the work goes down the drain: the core will not work.

Let's use an analogy. Imagine you are the head of a large construction company that needs to build many buildings in a short time. Your budget is unlimited, but you must hire at least a hundred people; that is the union's requirement. Which hundred people will you hire for such responsible work: anybody at all, or will you select carefully, since the budget allows? Whoever you take on is who you will build the houses with. It is reasonable to assume you will choose carefully, because the result depends on it.

It is the same with the core. For it to work even at an entry level, it should have at least a hundred keys or so. And if you write just anything into the core to pad it out a little, the result is guaranteed to fail.

General rules for constructing a semantic core

One query, one page. For each query you must know exactly which page the user should land on. Several pages must not target the same query: internal competition arises, and the quality of promotion drops sharply.

The user receives predictable content for his query. If a customer is looking for shipping methods to their region, do not send them to the home page of the site if that information is not there. Sometimes, after compiling the core, it becomes clear that new pages must be created for certain search queries. This is normal and common practice.

The core contains all types of queries (HF, MF, LF: high-, medium-, and low-frequency). Frequency is covered below; just keep this rule in mind as you read on. Put simply, you have to distribute these queries across specific pages of your site.

An example of a table distributing the core across site pages.

Core collection methods

Wrong: copying from competitors

This is the way to go when there is no time or money but a core is needed somehow. Find several of your direct competitors (the stronger, the better), then use a service such as spywords.ru to pull their lists of keys. Do this for each of them, merge the queries, throw out the duplicates, and you get a base you can somehow build on.

The disadvantages of this approach are obvious: there is no guarantee that you need to promote for the same queries, and parsing and tidying up such a core can take a lot of time.

It also happens that even near-identical competitors have specifics in their queries that they account for and you do not. Or they focus on something you do not do at all: those keys work into the void and drag the ranking down.

On the other hand, bringing such a base into shape takes a lot of time and effort, and sometimes money to pay for the work. When you start counting the economics (and in marketing this should always be done), you often realize that building your own core from scratch would cost the same, or even less.

We do not recommend this method unless your project is in complete disarray and you need to start somehow. Either way, after launch you will have to redo almost everything, and the earlier work will be wasted.

Correct: building your own semantic core from scratch

To do this, we study the site thoroughly and work out which audience we want to attract and with which problems, requirements, and questions. We think about how these people will search for us, compare that with the target audience, and adjust the goals if necessary.

Such work takes a lot of time; doing it all in a day is unrealistic. In our experience, the minimum time to collect a core is a week, provided one person works full-time on this project alone. Remember that the semantic core is the foundation of promotion: the more accurately we compose it, the easier all the other stages will be.

There is one danger that newbies forget about. The semantic core is not something done once and for all. We keep working on it constantly: as the business changes, the queries and keywords change. Some disappear, some become obsolete, and all of this must be reflected in the core immediately. This does not mean you can do a sloppy job at first and fix it later. It means that the more accurate the core, the faster you can make changes to it.

Such work is expensive from the start, even in-house (if you do not order the SC from an external company), because it requires qualification, an understanding of how search works, and full immersion in the project. The core cannot be handled in spare time; it should be the main task of an employee or a department.

Search frequency shows how often a given word or phrase is searched for per month. There are no formal criteria for dividing queries by frequency; it all depends on the industry and profile.

For example, the phrase "buy a phone on credit" gets 7,764 queries per month. For the phone market this is a mid-frequency query. Some phrases are asked much more often: "buy a phone" gets more than a million queries, a high-frequency query. And some are asked much less often: "buy a phone on credit via the Internet" gets only 584 queries, a low-frequency one.

Meanwhile, the phrase "buy a drilling rig" gets only 577 queries, yet it is considered high-frequency. That is even less than the low-frequency query from our previous example. Why?

The point is that the phone market and the drilling-rig market differ in unit volume by a factor of thousands, and the number of potential customers differs by the same amount. What is a lot for one niche is very little for another. Always look at the market size and know the approximate total number of potential customers in the region where you work.

Division of requests by relative frequency per month

High-frequency (HF). These should be included in the meta tags across the site and used for general site promotion. Competing head-on for HF queries is extremely difficult; it is easier to simply stay "in the trend", which is free. Include them in the core anyway.

Medium-frequency (MF). The same as high-frequency ones, only formulated a bit more precisely. Competition for them in the contextual-advertising block is not as tough as for HF, so they can already be used for paid promotion if the budget allows. Such queries can already drive targeted traffic to your site.

Low-frequency (LF). The workhorse of promotion. Properly configured, low-frequency queries provide the bulk of the traffic. You can freely advertise on them, optimize site pages for them, or even create new pages if you cannot do without it. A good SC is roughly three-quarters such queries and keeps expanding at their expense.

Ultra-low-frequency (ULF). The rarest but most specific queries, for example "buy a phone at night in Tver on credit". Few people bother with them when compiling a core, so there is practically no competition for them. Their drawback is that they really are asked very rarely, yet they take as much time as the rest. So it makes sense to deal with them once all the main work is done.
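
Since the bands are relative, it can help to tag queries programmatically once you have settled on thresholds for your niche. Below is a minimal Python sketch; the cut-off numbers are illustrative assumptions, not rules from the article.

```python
# Tag queries by frequency band. The thresholds are placeholders:
# as noted above, real cut-offs depend on the size of your market.
def frequency_band(monthly_queries, hf=10_000, mf=1_000, lf=100):
    if monthly_queries >= hf:
        return "HF"    # high-frequency
    if monthly_queries >= mf:
        return "MF"    # medium-frequency
    if monthly_queries >= lf:
        return "LF"    # low-frequency
    return "ULF"       # ultra-low-frequency

phone_queries = {
    "buy a phone": 1_000_000,
    "buy a phone on credit": 7_764,
    "buy a phone on credit via the Internet": 584,
}
for phrase, freq in phone_queries.items():
    print(f"{phrase}: {frequency_band(freq)}")
```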

Types of queries by purpose

Informational. Used to learn something new or to get information on a topic. For example: "how to choose a banquet hall" or "what kinds of laptops are there". All such queries should lead to informational sections: a blog, news, or topic collections. If you see many informational queries being typed and nothing on the site to land them on, that is a reason to create new sections, pages, or articles.

Transactional. Transaction = action: buy, sell, exchange, receive, deliver, order, and so on. Most often such queries are served by pages of specific products or services. If most of your transactional queries are high- or mid-frequency, move down the frequency scale and refine the queries. This lets you direct people precisely to the right pages instead of dropping them on the main page without specifics.

Others. Queries without a pronounced intent or action. For "beautiful balls" or "modeling clay crafts" it is impossible to say exactly why the person asked: maybe they want to buy, or learn the technique, or read how to do it, or have someone do it for them. Unclear. Work with such queries carefully and thoroughly clean out the trash keys.

To promote a commercial site you should mostly use transactional queries and avoid informational ones: for those, search engines show information portals, Wikipedia, and aggregator sites, and competing with them is almost impossible.
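
As a rough first pass, intent can be guessed from marker words. Here is a minimal Python sketch assuming simple hand-made marker lists; real intent is fuzzier, so treat the output only as a hint for manual review.

```python
# Guess query intent from marker words. The marker lists are
# illustrative assumptions; extend them for your own niche.
TRANSACTIONAL = {"buy", "order", "price", "delivery", "sell"}
INFORMATIONAL = {"how", "what", "why", "which", "where"}

def intent(query):
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"  # no pronounced intent: handle with care

print(intent("buy a phone on credit"))         # transactional
print(intent("how to choose a banquet hall"))  # informational
print(intent("beautiful balls"))               # other
```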

Trash keys

Sometimes queries include words or phrases that are not relevant to your industry or to what you actually do. For example, if you only make souvenirs from conifers, you probably do not need the query "bamboo souvenirs". Here "bamboo" is a garbage element that clogs up the core and muddies the search.

We collect such keys in a separate list; they will be useful later for contextual advertising. There we specify them as negative keywords, so that the site appears in the results for "pine souvenirs" but not for "bamboo souvenirs".

We do the same across the whole core: find what is unrelated to the profile, remove it from the SC, and enter it into the separate list.

Each query consists of three parts: a specifier, a body, and a tail.

The general principle is as follows: the body names the subject of the search, the specifier says what should be done with that subject, and the tail makes the whole query more specific.

By combining different specifiers and tails, you can get many keywords that fit into the core.

Building the core step by step from scratch

The very first thing you can do is go through all the pages of your site and write out all the product names and the set phrases for product groups. To do this, look at the titles of categories and sections and at the main characteristics. Record everything in Excel; it will come in handy at the next stages.

For example, if we have a stationery store, we get the following:

Then we add characteristics to each query, building up the "tail". To do this, we work out what properties these products have and what else can be said about them, and write it all down in a separate column:

After that, we add "specifiers": action verbs that relate to our topic. If, for example, you have a store, these will be "buy", "order", "in stock", and so on.

From all of this, we assemble individual phrases in Excel:
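
The same combination step can be scripted instead of done by hand in Excel. A minimal Python sketch with made-up stationery-store columns (the word lists are illustrative assumptions):

```python
# Build candidate keys as specifier x body x tail combinations,
# mirroring the three Excel columns described above.
from itertools import product

specifiers = ["buy", "order", ""]                # action verbs; "" keeps the bare phrase
bodies     = ["ballpoint pens", "notebooks"]     # product names from the catalogue
tails      = ["", "wholesale", "with delivery"]  # clarifying properties

keys = [" ".join(p for p in combo if p)
        for combo in product(specifiers, bodies, tails)]
print(len(keys), "candidate keys")  # 3 * 2 * 3 = 18
print(keys[:4])
```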

Collecting extensions

Let's look at three typical tools for collecting the core: two free ones and a paid one.

Free. Type your phrase into it and you get a list of queries that resemble yours. Look through it carefully and pick what suits you. Run everything you got at the first stage through it in this way. The work is long and tedious.

As a result, you will have a semantic core that reflects the content of your site as accurately as possible. From here on you can fully work with it for promotion.

When collecting words, use the region where you sell your product or service. If you do not work across all of Russia, switch to the "by region" mode (just below the search bar). This will give you an accurate picture of the queries in your location.

Consider query history as well. Demand is not static, which many forget. For example, if you look up "buy flowers" at the end of January, it may seem that almost no one is interested in flowers: only a hundred or two queries. Look it up in early March and the picture is completely different: thousands of users are searching for it. So remember seasonality.

Also free, it helps find and select keywords, forecasts queries, and provides performance statistics.

Key Collector. This program is a real harvester that can do 90% of the work of collecting a semantic core. But it is paid: almost 2,000 rubles. It searches for suggestions from many sources, checks ratings and queries, and collects analytics on the core.

The main features of the program:

collection of key phrases;

determining the cost and value of phrases;

identification of relevant pages;

Everything it can do can also be done for free with several free analogues, but it will take many times longer. Automation is this program's strong point.

As a result, you get not only a semantic core but also full analytics and recommendations for improvement.

Removing trash keys

Now we need to clean the core to make it even more efficient. Use Key Collector (it does this automatically) or hunt for the garbage manually in Excel. At this stage we need the list of irrelevant and harmful queries that we compiled earlier.

Trash-key removal can be automated
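
Outside Key Collector, the same cleanup is easy to script. A minimal Python sketch, assuming the core and the stop list are plain one-column CSV files (the file names are hypothetical):

```python
# Split the core into kept phrases and trash, using a stop-word list.
import csv

def load_column(path):
    with open(path, newline="", encoding="utf-8") as f:
        return [row[0].strip().lower() for row in csv.reader(f) if row]

core  = load_column("core.csv")        # hypothetical file names
stops = set(load_column("stop_words.csv"))

kept, trash = [], []
for phrase in core:
    # a phrase is trash if any of its words is in the stop list
    (trash if set(phrase.split()) & stops else kept).append(phrase)

print(f"kept {len(kept)} phrases, moved {len(trash)} to the trash list")
```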

Grouping queries

Now that collection is done, all the found queries need to be grouped. This is done so that keywords close in meaning are assigned to the same page rather than smeared across different ones.

To do this, we combine queries that are similar in meaning and answered by the same page, and next to them we note which page they refer to. If there is no such page but the group has many queries, it most likely makes sense to create a new page, or even a site section, to send all such queries to.

An example of grouping, again, can be seen in our worksheet.
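
A first grouping pass can also be automated before the manual review. Here is a naive Python sketch that lumps together queries sharing the same set of meaningful words; the noise-word list is an assumption, and real grouping should still be checked by hand:

```python
# Group queries by their set of "meaningful" words:
# queries with the same signature are candidates for one page.
from collections import defaultdict

NOISE = {"a", "the", "to", "buy", "order"}  # assumed noise words

def signature(query):
    return frozenset(w for w in query.lower().split() if w not in NOISE)

queries = ["buy bed linen", "bed linen order",
           "buy duvet covers", "duvet covers"]

groups = defaultdict(list)
for q in queries:
    groups[signature(q)].append(q)

for members in groups.values():
    print(members)  # two groups: bed linen vs duvet covers
```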

Use every piece of automation software you can get your hands on. It saves a lot of time when building the core.

Do not collect informational and transactional queries on one page.

The more low-frequency queries in the texts, the better. But do not get carried away and turn the site's text into something only a robot can understand. Remember that real people will read you too.

Periodically clean and update the core. Make sure the information in the semantic core is always up to date and reflects the current state of the business. Otherwise you will spend money on things you cannot actually deliver to your customers.

Remember the point of it all. In the pursuit of search traffic, remember that people come from different sources and stay where they find something interesting. If your core is always current and the text on the pages is written in human, clear, and interesting language, you are doing everything right.

Finally, once more, the core-building algorithm itself:

1. Find all keywords and phrases.

2. Clean them of junk queries.

3. Group the queries by meaning and map them to the pages of the site.

Do you want to start promoting your site but realize that collecting the semantic core takes a long time? Or would you rather skip the nuances and just get the result? Write to us, and we will select the best option for promoting your website.

This is an article on how to compose the semantic core on your own so that your online store reaches the first positions in search results. The keyword selection process is not that simple: it requires care and a fair amount of time. But if you are ready to move forward and grow your business, this article is for you. It covers in detail the methods of collecting keywords and the tools that can help you.

Why do it at all? The answer is banal: so that search engines "fall in love" with the site, and so that when users search for specific keywords, it is your resource that comes up.

And forming the semantic core is the first, but very important and confident, step toward that goal!

The next step is to create a kind of skeleton, which means distributing the selected "keys" across particular pages of the site. Only after that should you move on to the next level: writing and publishing articles and tags.

Note that the web offers several definitions of the semantic core (hereinafter, SC).

In general they are similar, and summarizing them gives the following: a set of keywords (as well as related phrases and word forms) for promoting a website. Such words accurately characterize the focus of the site, reflect users' interests, and correspond to the company's activities.

Our article walks through an example of forming an SC for an online bedding store. The whole process is divided into five sequential steps.

1) Collecting basic queries

Here we mean all the phrases that match the store's line of business. That is why it is so important to think through, as precisely as possible, the phrases that best characterize the goods presented in the catalog.

Of course, this is sometimes hard to do. But the right-hand column of Wordstat.Yandex comes to the rescue: it contains the phrases users most often enter together with the phrase you have chosen.

Watch the video on working with Wordstat (only 13 minutes)

To get the results, enter the desired phrase in the service's search box and click the "Select" button.

To avoid copying all the queries manually, we recommend the Wordstat Helper extension, built for the Mozilla Firefox and Google Chrome browsers. This add-on greatly simplifies word selection. How it works is shown in the screenshot below.

Save the selected words in a separate document. Then brainstorm and add the phrases you come up with yourself.

2) How to expand the SC: three options

The first step is relatively straightforward, although it requires attentiveness. The second, however, is active brainwork: each selected phrase is the seed of a future group of search queries on which you will be promoted.

To collect such a group, you must use:

  • synonyms;
  • paraphrasing.

In order not to get bogged down at this stage, use special applications or services. How to do this is described in detail below.

How to expand the SC with Google Keyword Planner

Go to the relevant section (called the Keyword Planner) and pick the phrases that most accurately characterize the group of queries you are interested in. Leave the other parameters alone and click the "Get ..." button.

After that, just download the results.

How to expand the SC using Serpstat (ex-Prodvigator)

You can also use another similar service, one that analyzes competitors. After all, competitors are the best source of the keywords you need.

The Serpstat service (ex-Prodvigator) lets you determine exactly which key queries helped your competitors become search-engine leaders. There are other services too; decide for yourself which one to use.

In order to select search queries, you need:

  • enter one query;
  • indicate the region of promotion you are interested in;
  • click on the "Search" button;
  • and when the analysis finishes, select the "Search queries" option.


After that, click on the "Export Table" button.

How to compose the semantic core: expanding the SC with Key Collector / Slovoeb

Do you have a large store with a huge number of products? In that case, you need the Key Collector service.

If, however, you are just starting to learn the science of selecting keywords and forming a semantic core, we recommend another tool with a dissonant name: Slovoeb. Its advantage is that it is completely free.

Download the application, go to the Yandex.Direct settings, and enter the username and password of your Yandex mailbox.

After that:

  • open a new project;
  • click on the Data tab;
  • there click on the Add phrases option;
  • indicate the region of promotion you are interested in;
  • enter the queries that were generated earlier.

After that, start collecting the SC from Wordstat.Yandex. To do this:

  • go to the "Data collection" section;
  • then select the section "Batch collection of words from the left column";
  • a new window will appear on the screen in front of you;
  • in it - do as shown in the screenshot below;


Note that Key Collector is an excellent tool for large projects. With its help it is easy to collect statistics from services that analyze competitors' sites, for example SEMrush, SpyWords, Serpstat (ex-Prodvigator), and many others.

3) Delete unnecessary "keys"

So, the base has been formed, and the volume of collected "keys" is more than solid. But if you analyze them (here that simply means reading them carefully), you will find that not all of the collected words match the theme of your store exactly, which means "non-target" users will land on the site through them.

Such words should be deleted.

Here is another example. Say the site sells bedding, but the assortment simply does not include the fabric such linen is sewn from. In that case, everything related to fabrics must be removed.

By the way, the complete list of such words has to be formed manually; no automation will help here. It naturally takes a fair amount of time, and to avoid missing anything we recommend arranging a full-fledged brainstorming session.

Here are the types of words that will be irrelevant for an online store:

  • name and mention of competing stores;
  • cities and regions where you do not work and where you do not supply goods;
  • all words and phrases containing "free", "old", "used", "download", etc.;
  • the name of a brand that is not represented in your store;
  • "Keys" in which there are errors;
  • repeating words.

Now we will tell you how to delete all the words you don't need.

Form a list

Open Slovoeb, select the "Data" section, go to the "Stop Words" tab, and enter the manually selected words there. You can type the words in by hand or simply upload a prepared file with them.

Thus you will be able to quickly eliminate from your list the stop words that do not match the subject or specifics of the store.

How to build a semantic core: a quick filter

You now have a kind of rough list. Analyze it carefully and start deleting unnecessary words manually. The same Slovoeb service will help streamline this task. Here is the sequence of steps to follow:

  • take the first unnecessary word from your list, for example, let it be the city of Kiev;
  • enter it in the search box (number 1 on the screenshot);
  • mark the corresponding lines;
  • right-click on them and delete;
  • press Enter in the search field to return to the original list.

Repeat these steps as many times as needed until you have gone through the entire word list.

4) How to compose a semantic core: grouping queries

To understand how to distribute words across specific pages, group all the queries you have selected into so-called semantic clusters.

A cluster is a group of "keys" similar in subject and meaning, arranged as a multilevel structure. Say the first-level cluster is the query "bedding"; the second-level clusters would then be "blankets", "duvet covers", and the like.

In most cases, clusters are defined by brainstorming. It is important to know the assortment and the features of your product well, and also to take into account how competitors' structures are built.

The next thing to pay special attention to: the last level of a cluster should contain only queries that correspond to exactly one need of potential customers, that is, to a specific type of product.

Here, the same Slovoeb service and the Quick Filter option described above come to your aid again. They will help you sort search queries into specific categories.

To do this kind of sorting, a few simple steps are needed. First, in the service's search bar, enter the keyword that will be used in the name of:

  • categories;
  • landing page, etc.

For example, it could be a bedding brand. In the results, mark the phrases that suit you and copy them.

Phrases you do not need can simply be selected with the right mouse button and deleted.

On the right side of the service menu, create a new group and name it appropriately, for example after the brand.

To move the selected phrases into this part of the tab, select the Data line and click the Add phrases caption. For details, see the screenshot.

Pressing Enter in the search box returns you to the original word list. Repeat the described procedure for all other queries.

The system displays all selected phrases in alphabetical order, which makes them easier to work with: you can easily see what can be deleted or group words into a particular group.

Manual grouping, we should add, also takes a fair amount of time, especially when there are very many key phrases. So we recommend using automated paid programs, such as:

  • Key Collector;
  • Rush-Analytics;
  • Just-Magic and others.

There is also a completely free script from Devaka.ru. Note, too, that some types of queries often have to be combined,

since there is no point in piling up a huge number of categories on the site that differ only in names like "Beautiful bedding" and "Fashionable bedding".

To determine the importance of each individual key phrase for a particular category, simply transfer the phrases to the Google planner, as shown in the screenshot.

This way you can determine how much demand there is for a particular search query. Depending on usage, all of them can be divided into several categories:

  • high-frequency;
  • mid-frequency;
  • low-frequency;
  • and even micro-low-frequency ones.

However, it is important to understand that there are no exact numbers assigning a query to a particular group. You should consider the topic of both the site and the query. In one case, a query with up to 800 searches per month can be considered low-frequency; in another, a query with up to 150 will be high-frequency.

The most high-frequency queries of those selected will later go into tags, while the lowest-frequency ones are best used to optimize specific store pages. Since competition for such queries is low, it is often enough to fill those subsections with high-quality text descriptions for the page to reach the top of the search results.

All of the above will allow you to form a clear structure containing:

  • all the necessary and important categories - to make a visualization of the "skeleton" of your store, use the additional service XMind;
  • landing pages;
  • pages that provide information that is important for the user - for example, with contact information, with a description of delivery conditions, etc.

How to extend the semantic core: an alternative method

As the site develops and the store expands, the SC will grow too. To keep up, monitor and collect key phrases within each group; this greatly simplifies and speeds up expanding the SC.

To collect similar queries and suggestions, use additional services, including:

  • Serpstat (ex. Prodvigator);
  • Ubersuggest;
  • Keyword Tool;
  • and others.

The screenshot below shows how to use the Serpstat (ex-Prodvigator) service.

How to compose a semantic core: what to do after going through our instructions

So, to form the SC for an online store on your own, you need to perform a number of sequential steps.

It all starts with selecting the keywords that can only be used when searching for your products; these will become the main group of queries. Then the semantic core is expanded using search-engine tools. Analyzing competitor sites for this is also recommended.

The next steps will be like this:

  • analysis of all selected search queries;
  • removal of requests that do not correspond to the meaning of your store;
  • grouping of requests;
  • formation of the site structure;
  • constant tracking of search queries and expansion of the SC.

The method of building an SC for an online store presented in this article is far from the only correct one; there are others. But we have tried to present the most convenient approach.

Naturally, such factors as the quality of text descriptions, articles, tags, and store structure also matter for promotion. But that is a topic for a separate article.

In order not to miss new and useful articles, be sure to subscribe to our newsletter!

Not signed up for the training yet? Sign up right now, and in 4 days you will have your own website.

If you can't make it yourself, we'll make it for you!


In this post we will describe the complete algorithm for collecting a semantic core, mainly for an informational site, though the approach works for commercial sites as well.

Initial semantics and creation of the site structure

Preparing words for parsing and initial site structure

Before we can parse words, we need to know them. So first we draw up the initial structure of the site and the initial words for parsing (also called markers).

You can get the initial structure and words from:

1. Logic and words from your own head (if you understand the topic).
2. Your competitors, whom you analyzed when choosing the niche or found by entering your main query.
3. Wikipedia. It usually looks like this:

4. Wordstat for your main queries, including the right-hand column.
5. Other topic books and reference works.

For example, say the theme of our site is heart disease. Clearly, our structure must include every heart disease.

You cannot do without a medical reference book. I would not rely on competitors, because they may not have all the diseases; most likely they have not had time to cover them.

Your initial parsing words will therefore be exactly the full list of heart diseases, and from the keys we parse you will build the site structure once you start grouping them.

In addition, you can take all heart medications as an expansion of the topic, and so on. Look at Wikipedia, competitors' site headings, and Wordstat, think logically, and find more marker words to parse in this way.

Site structure

You can look at competitors for general reference, but you do not always have to copy their structure. Proceed primarily from the logic of your target audience: they are the ones entering the queries that you parse from the search engines.

For example, how should it go: list all heart diseases and hang symptoms and treatment off each one? Or make top-level sections for symptoms and treatment and hang the diseases off those? Such questions are usually resolved by grouping keywords based on search-engine data. But not always; sometimes you have to make your own choice and decide which structure is best, because queries can overlap.

Always remember that the structure takes shape throughout the collection of semantics. Sometimes it starts out as a few headings and then expands with further grouping and collection, as you begin to see the queries and the logic. And sometimes you can compose it right away without parsing keywords, because you know the topic well or competitors represent it well. There is no fixed system for drawing up site structure; you could say it is your personal craft.

The structure can be your own (different from competitors'), but it must be convenient for people, match their logic (and therefore the logic of search engines), and make it possible to cover all the thematic words in your niche. It should be the best and most convenient!

Think ahead. It happens that you take a niche, then want to expand it, and you start reworking the structure of the entire site. And changing an established structure on a live site is hard and dreary: ideally you would have to change the nested URLs and re-link everything on the site itself. In short, it is tedious, demanding work, so decide firmly up front what you need and how it should look!

If you are completely new to the topic of the site and do not know how the structure will come together or which initial parsing words to take, you can swap stages 1 and 2. That is, first parse competitors (how to do that is covered below), look at their keys, compose the structure and the initial parsing words from that, and then parse Wordstat, suggestions, and so on.

To compose the structure, I use the mind mapper XMind. It is free and has all the essentials.

A simple structure looks like this:


This is the structure of a commercial site. Informational sites usually have no intersections or product-card filters. But this structure is not complicated either; it was drawn up for a client so that he could understand it. Usually my structures consist of many arrows, intersections, and comments, and only I can make sense of them.

Is it possible to create semantics while filling the site?

If the semantics are easy and you are confident you know the topic, you can build semantics in parallel with filling the site. But the initial structure must be sketched out without fail. I sometimes practice this myself in very narrow or very wide niches, so as not to spend ages collecting semantics but to launch the site right away; still, I would not recommend it. The probability of errors is very high if you have no experience. It is easier when all the semantics are ready, the whole structure is ready, and everything is grouped and understood. Besides, with a finished core you can see which keys to prioritize and which have no competition and will bring more visitors.

You should also take the size of the site into account: if the niche is very wide, there may be no point in collecting all the semantics up front; it is better to do it along the way, because collection can take a month or more.

So, whether or not we sketched the structure first, we move to the second stage: we have a list of starting words and phrases on our topic that we can begin to parse.

Parsing and working in KeyCollector

For parsing, I naturally use KeyCollector. I will not dwell on configuring it: read the program's help or find setup articles on the Internet; there are plenty, and everything is covered in detail there.

When choosing parsing sources, weigh your labor costs against their payoff. For example, if you parse the Pastukhov or MOAB databases, you will bury yourself in a heap of garbage queries that have to be sifted out, and that costs time; in my opinion, it is not worth it for the sake of a couple of extra queries. On the topic of databases there is a very interesting study from Rush Analytics: of course they praise themselves in it, but if you look past that, the data on the percentage of bad keywords is very interesting: http://www.rush-analytics.ru/blog/analytica-istochnikov-semantiki

At the first stage, I parse Wordstat, AdWords, and their suggestions, and use the Bukvarix keyword database (the desktop version is free). I also used to look through YouTube suggestions manually; recently KeyCollector added the ability to parse them, which is awesome. If you are a real completionist, you can plug in other keyword databases as well.

Start parsing and off you go.

Cleaning up the semantic core for an information site

We have parsed the queries and got a list of assorted words. It certainly contains the right words, as well as trash: empty, off-topic, or irrelevant queries, and so on. They need to be cleaned out.

I do not delete unnecessary words but move them into groups, because:

  1. In the future, they can become food for thought and become relevant.
  2. We exclude the possibility of accidental deletion of words.
  3. When parsing or adding new phrases, they will not be added again if you check the corresponding box.


I sometimes forgot to check that box, so I set up parsing into a single group and parse keys only in it, so that collection is not duplicated:


You can work this way, or in whatever way suits you.

Collecting frequencies

We collect, through Yandex.Direct, the base frequency [W] and the exact frequency ["!W"] for all the words.

Whatever Direct fails to collect, we collect through Wordstat.

Cleaning one-word and non-format queries

We filter for one-word queries, look through them, and remove the unneeded ones. Some one-word queries are pointless to pursue: they are ambiguous or duplicate another one-word query.

For example, our theme is heart disease. There is no point in pursuing the word "heart" on its own: it is unclear what the person means, and the query is too broad and ambiguous.

We also look at the words for which no frequency was collected: either they contain special characters, or the query has more than seven words. We move these to non-format; people hardly ever enter such queries.

Cleaning by base and exact frequency

All words with a base frequency [W] from 0 to 1 are removed.

I also remove everything with an exact frequency ["!W"] from 0 to 1.

I move them to separate groups.

Normal, logical keywords can later be found among these words. If the core is small, you can immediately review all zero-frequency words by hand and keep those you think people actually enter. This helps cover the topic completely, and some visitors may well arrive through those words. Naturally, such words should be handled last, because they will definitely not bring big traffic.

The 0-to-1 threshold itself also depends on the topic: if there are a lot of keywords, you can filter from 0 to 10. It all depends on the breadth of your topic and your preferences.

Cleaning by coverage

The theory is as follows: take the word "forum". Its base frequency is 8,136,416, while its exact frequency is 24,377, a difference of more than 300 times. So we can assume the query is "empty", inflated by a mass of tails.

Based on this, I calculate the following KEI:

Exact frequency / base frequency * 100% = coverage

The lower the percentage, the more likely the word is empty.

In KeyCollector, this formula looks like this:

YandexWordstatQuotePointFreq / (YandexWordstatBaseFreq + 0.01) * 100

Here, too, everything depends on the topic and on the number of phrases in the core: you might remove everything with coverage below 5%, and where the core is large, you might drop even 10-30%.
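
The same check takes a few lines of Python if your frequencies are already exported. A minimal sketch with made-up numbers (only the "forum" pair comes from the example above):

```python
# Flag likely "empty" queries by coverage = exact / base * 100.
phrases = {
    "forum": (8_136_416, 24_377),              # base, exact (example above)
    "heart disease symptoms": (9_500, 2_100),  # made-up counterexample
}

MIN_COVERAGE = 5.0  # the example cut-off from the text; tune per niche
for phrase, (base, exact) in phrases.items():
    coverage = exact / (base + 0.01) * 100
    verdict = "keep" if coverage >= MIN_COVERAGE else "likely empty"
    print(f"{phrase}: coverage {coverage:.2f}% -> {verdict}")
```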

Implicit duplicate cleaning

To clean up implicit duplicates, we collect their AdWords frequency and use it as a guide, because AdWords takes word order into account. To save resources, we collect this indicator not for the entire core but only for the duplicates.

This way we find and mark all implicit duplicates. Close the Implicit Duplicate Analysis tab: the marked phrases stay in the working group. Now display only them, because parameters are collected only for the phrases currently shown in the group, and only then start parsing.

We wait for AdWords to return the indicators and go back into implicit duplicate analysis.

We set these parameters for smart group marking and click "Perform smart check". This way, within each duplicate group, only the query with the highest AdWords frequency stays unmarked.

Still, it is better to run through all the duplicates by hand in case something went wrong. Pay special attention to groups with no frequency indicators: there, duplicates get marked at random.

Everything marked in implicit-group analysis is also marked in the working group. So once the analysis is complete, simply close the tab and move all marked implicit duplicates to the appropriate folder.

Stop word cleaning

I divide the stop words into groups as well. I list cities separately; they may come in handy in the future if we decide to create a directory of organizations.

Separately, I set aside words containing "photo" and "video". They may come in handy someday too.

There are also "vital" queries, for example Wikipedia and forums; in a medical topic this may include Malysheva, Komarovsky, and the like.

It all depends on the topic, too. You can also set aside commercial queries separately: "price", "buy", "shop".

The resulting list of stop-word groups looks like this:

Cleaning words with inflated stats

This applies to competitive topics: competitors often inflate query statistics to mislead you. So collect seasonal data and weed out all words with a median equal to 0.

You can also look at the ratio of the base frequency to the average; a large difference may likewise indicate an inflated query.

But bear in mind that these indicators may also mean the words are new (statistics for them appeared only recently) or simply seasonal.
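
Here is a sketch of that check in Python, assuming you have exported twelve months of frequency history per phrase (the numbers are invented):

```python
# Flag phrases whose monthly-frequency median is 0: with real demand
# spread over the year, the median should almost never be zero.
from statistics import median

history = {
    "heart attack symptoms": [880, 910, 870, 900, 950, 890,
                              860, 900, 920, 940, 910, 930],
    "suspicious phrase":     [0, 0, 0, 4200, 0, 0, 0, 0, 3900, 0, 0, 0],
}

for phrase, months in history.items():
    if median(months) == 0:
        print(f"{phrase}: possibly inflated or seasonal, review by hand")
    else:
        print(f"{phrase}: looks organic")
```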

Geo cleaning

Usually informational sites do not require a geo-dependence check, but I will describe this step just in case.

If there is any doubt whether some queries are geo-dependent, it is better to check via Rookee collection; it sometimes makes mistakes, but much less often than checking this parameter through Yandex. Then, after the Rookee collection, manually re-check all the words that were flagged as geo-dependent.

Manual cleaning

By now our core has shrunk several times over. We go through it manually and remove the remaining unnecessary phrases.

At the output we get the following groups in our core:

Yellow - worth digging through; you can find words for the future there.

Orange - may come in handy if we expand the site with new services.

Red - not useful.

Analyzing query competition for informational sites

Having collected and cleaned the queries, we now need to check their competition to understand which queries to work on first.

Competition by number of documents, titles, and main pages

All of this is easy to collect via the KEI parameters in KeyCollector.

For each query we get the number of documents found in the search engine (Yandex in our example), the number of main pages in the results for the query, and the number of occurrences of the query in titles.

Various formulas for combining these indicators can be found online; a freshly installed KeyCollector even seems to ship with a standard KEI formula. I do not follow them, because each factor should carry its own weight. The most important is the presence of main pages in the results, then titles, then the number of documents. It is unlikely this weighting can be captured properly in a formula, and even if it could, you would need a mathematician, and the result would no longer fit into KeyCollector's capabilities.

Competition on link exchanges

It gets more interesting here. Each exchange has its own algorithm for calculating competition, and presumably they consider not only the presence of main pages in the search results but also page age, link mass, and other parameters. These exchanges are, of course, designed for commercial queries, but some conclusions can still be drawn for informational ones.

We collect data from the exchanges, derive average indicators, and use those as a guide.

I usually collect from 2-3 exchanges. The main thing is that all queries are collected from the same exchanges and the average is derived only across them, not with some queries scored by one set of exchanges and others by another.

For a more descriptive view, you can apply a KEI formula that shows the cost of one visitor based on the exchanges' parameters:

KEI = AverageBudget / (AverageTraffic + 0.01)

Dividing the average promotion budget across the exchanges by their average traffic forecast gives the cost of one visitor according to exchange data. For example, an average budget of 5,000 rubles against a forecast of 100 visitors works out to roughly 50 rubles per visitor.

Competition via Mutagen

Mutagen is not built into KeyCollector, but that is no obstacle: all the words can easily be exported to Excel and then run through the service.

Parsing competitors with Keyso

Why is Keyso better? It has a larger database than its competitors, and that base is clean: there are no duplicate phrases written in a different word order. For example, you will not find repeated keys like "type 1 diabetes" and "diabetes type 1" there.

Keyso can also find sites that share a single AdSense, Analytics, or Leadia counter, and so on: you can see what other sites the owner of the analyzed site runs. And in general, for finding competitors' sites, I think it is the best solution.

How do I work with Keyso?

We take any one competitor's site; more is better, of course, but it is not critical, because we will work in two iterations. We enter it into the field and click "Analyze".

We receive information about the site. What interests us here is the competitors list, so we click to open them all.

All the competitors open up before us.

These are all the sites that overlap in any way with the analyzed one. They will include youtube.com, otvet.mail.ru, and other large portals that write about everything. We do not need those; we need sites purely on our topic. So we filter them by the following criteria.

Similarity is the percentage of shared keys out of the total number of keys of the given domain.

Thematicity is the number of the analyzed site's keys found among the keys of the competitor's domain.

Crossing these two parameters weeds out the general-purpose sites.

We set thematicity to 10 and similarity to 4 and see what we get.

That leaves 37 competitors. We will still check them manually: upload them to Excel and, if necessary, remove the unneeded ones.
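
If you export the competitor table, the same filter is easy to reproduce in code. A minimal pandas sketch, assuming columns named domain, similarity, and thematicity (the real Keyso export may name them differently):

```python
# Keep only niche competitors: thematicity >= 10 and similarity >= 4.
import pandas as pd

df = pd.read_csv("keyso_competitors.csv")  # hypothetical export file

niche = df[(df["thematicity"] >= 10) & (df["similarity"] >= 4)]
niche = niche.sort_values("thematicity", ascending=False)

print(len(niche), "candidate niche competitors")
niche.to_csv("competitors_filtered.csv", index=False)
```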


Now go to the group report tab and enter all the competitors we found above. Click "Analyze".

We get a list of the keywords of all these sites. But the topic is not yet fully covered, so we move on to the competitors of this group.

Now we get all the competitors of all the sites we entered. There are several times more of them, including many general-topic ones. Let's filter them by similarity, say at 30.

We get 841 competitors.


Here we can see how many pages each site has and how much traffic, and conclude which competitor is the most effective.

We export them all to Excel, go through them by hand, and keep only the competitors in our niche. You can mark the most effective ones in order to study them later: what features they have on their sites, which queries bring them a lot of traffic.

Now we go back to the group report, add all the competitors we have found, and get a list of keywords.

Here we can immediately filter the list by "!Wordstat" greater than 10.

These are our queries. Now we can add them to KeyCollector, specifying that phrases already present in any other KeyCollector group should not be added.

Now we clean up our keys, then expand and group our semantic core.

Semantic core collection services

Many companies in this industry offer clustering services. If you are not ready to spend time learning the intricacies of clustering and doing it yourself, you can find plenty of specialists willing to do the work.

Yadrex

One of the first on the market to use artificial intelligence to create a semantic core. The head of the company is himself a professional webmaster and SEO specialist, so he vouches for the quality of his employees' work.

You can also call the listed numbers to get answers to any questions about the work.

When you order the service, you receive a file indicating the core's content groups and its structure. You also get the structure as a MindMup mind map.

The cost of the work varies with volume: the larger the job, the cheaper each key. The maximum price is 2.9 rubles per key for an informational project and 4.9 rubles per key for a selling one. Discounts and bonuses are provided for large orders.

Conclusion

This completes the creation of the semantic core for an informational site.

I advise you to follow the change history of the KeyCollector program, because it is constantly gaining new tools; for example, YouTube parsing was added recently. New tools let you expand your semantic core further.

In 2008, I created my first internet project.

It was an online electronics store that needed promotion.

Initially, I handed the promotion work over to the programmers who had built the store.

What to promote?

They made a list of keys in 5 minutes: mobile phones, camcorders, cameras, iPhones, Samsungs - all categories and products on the site.

These were general names that did not at all resemble a properly composed semantic core.

A long period passed without results.

The incomprehensible reports forced me to look for contractors specializing in website promotion.

I found a local company and entrusted them with the project, but again to no avail.

Then came the understanding that promotion should be done by real professionals.

After rereading many reviews, I found one of the best freelancers, who assured me of success.

Six months later, there are no results again.

It was the lack of organic results over those two years that led me to SEO.

Subsequently, it became my main calling.

Now I understand what was wrong in my initial promotion.

These mistakes are repeated by the bulk of even experienced SEO specialists who have spent years promoting websites.

The blunder was incorrect work with keywords.

In fact, there was no understanding of what we were promoting.

Free tools for composing an SC

Free keyword tools are essential for finding interesting ideas.

You don't have to pay for them; sometimes registration is required.

I will tell you in detail the secrets of getting ready-made semantics out of these tools.

Collecting a list of keys is quite easy; there are many free and paid tools for it.

Let's start with 4 popular free resources that I regularly use myself.

1.1. Google Keyword Planner is the most versatile tool, with filters by region and language.

It is interesting because it makes a large selection of related keys and shows traffic and the level of competition in contextual advertising.

It requires registration with Google AdWords to work.

It is also important to create at least one campaign, which you do not have to pay for.

All of these processes are visually clear, so let's jump right into the Keyword Planner.

To start working with the tool, click on the wrench (upper right corner) and select “Keyword Planner”.

The screenshot shows the new version of the design.

After that, you will be taken to a page where you can enter many key variants, run a search on a relevant page, or select the desired category.

In the new design interface, we see a window like this.

We'll look at both keyword research options.

OPTION 1

You see 2 modules.

  1. Find keywords
  2. Get data on the number of queries and forecasts

When you open the "Find keywords" module, you get a form for entering variants of key phrases, separated by commas.

As we can see, the number of returned variants has expanded significantly.

In the old interface there were no more than 700 of them; in the new one we got 1,365 options.

The number of options returned is still inferior to paid services, which pick up a wider list of low-frequency queries.

In the same window, you can adjust the following functions.

  1. Search query region
  2. Search network: Google, or Google plus partners
  3. Downloading the resulting options as a CSV Excel file
  4. Data shown for the year by default; can be adjusted for seasonal queries
  5. The number of options found
  6. Adjusting the data or adding filters (by default there is only one filter: do not show adult content).

The monthly data is broken down into attractive infographics, which is great for reviewing seasonal queries.

Another important factor is which devices these keys are viewed from: the desktop or mobile versions.

Scrolling below, we get the actual list of keys with frequency and the minimum and maximum bid per click.

Moving to the "Get data on the number of queries and forecasts" module, we enter the queries considered earlier.

We receive conversion data for the selected keywords: spend, number of conversions, conversion value, clicks.

This is valuable information for budgeting in Google AdWords and for a rough comparison with SEO.

I must immediately disappoint those who plan to use only this tool.

The correctness of the data is highly questionable.

Renowned SEO expert Rand Fishkin has criticized the accuracy of its traffic numbers and the correctness of its clustering.

Therefore, it is better to additionally use other known resources.

1.2. Wordstat.yandex.ru is the analogue from Yandex, which also shows traffic and related queries.

To work, you need to log in using Yandex mail or social networks.

Questions: why, who, what, how, where are commonly used words in this segment.

See below for a list of popular words for voice search in the English-speaking segment.

At the same time, I want to warn you - do not over-optimize!

John Mueller, one of Google's analysts, has warned about this.

There is no need to specifically rework content for voice search if that lowers its quality.

Think about behavioral factors, these are the most important parameters.

1.4. Predicting keys. For this, use a free key-collection utility.

I understand that the terminology is complex, so let's look at an example.

Just create one query of this kind in the first column: (SYNONYM1 | synonym2 | synonym3) (synonym4 | synonym5 | synonym6).

For example: (repair | fix | restore) (engine | internal combustion engine).

Enter the regions in the other columns: Moscow, MSC (in the GEO column).

In the “Region” column, write down the number of the region according to Wordstat.

Then press the "GET KEYWORDS (1)" button, and the "FIND MORE KEYWORDS (2)" button will appear: the system will show the Wordstat output excluding the words you have already used.

You can also click the lines listed below (3) to check the results for the selected groups.

Enter unnecessary words into the MINUS-words column.

The necessary ones are placed in other columns (for convenience, they are labeled as Properties, Types, Nomenclatures, Brands, Transactional Requests, Regions, etc.).

For example, it is clear here that “do it yourself, video, mechanics” - go to the negative, and “diesel, capital, turbine, block, injector” - will be useful to us for subpages and subsections (4).

After each update of the list, press "GET KEYWORDS (5)" and "FIND MORE KEYWORDS (6)" again, and continue the cycle until only junk remains in the output.

The system will substitute already used queries into the negative operator.

The utility's convenience lies in the fact that it excludes repetitions in the Yandex search query, which greatly simplifies the work.

Ready-made lists can be transferred to Excel by clicking on each line or simply by dropping them directly into KeyCollector (after adding a list of negative keywords to the appropriate section).

This can cut the time for parsing semantics from several hours to several minutes.

1.5. Ubersuggest - this tool was bought by the renowned SEO expert Neil Patel for $120,000.

After that, he invested another $120,000 in its improvement and does not stop there.

He also promised that Ubersuggest will always be free.

The data for this tool comes from Google Keyword Planner and Google Suggest.

No registration is needed to use it, which is also a big plus.

This tool does not have a Russian version, but it can still return data on Russian-language keys.

To search for a list of keys, enter a high-frequency query, select a language and a search engine.

An additional option is to add a list of negative keywords in the field on the right.

The received data can be downloaded as a CSV file for Excel.

This functionality is implemented at the bottom of the resulting list.

Paid Key Finder Tools

Paid tools are essential for getting a more complete list of keys.

They also provide additional important parameters for the analysis of search keys.

I will tell you about 3 paid tools that I personally use.

Many low-frequency queries can also be picked up using SEO resources: serpstat.com, ahrefs.com, semrush.com, moz.com, keywordtool.io and others.

You do not need to pay for everything, choose the ones that suit you best.

These tools are paid, with different monthly plans.

If you need to get access one-time, please contact me for freelancing.

For a small payment (from $ 5) I will provide you with information on your keys.

The free versions of these tools have limited functionality.

To search for low-frequency keys, you must enter a high-frequency query, the selected systems independently expand the possible options.

For the query "plastic windows", Serpstat returned 5,200 variants for Yandex Moscow and 3,500 for Google Russia.

For the same request, Ahrefs generated 7721 variants of different keys.

By the way, Tim Soulo, a Ukrainian marketing specialist at Ahrefs, stated that he would give a six-month subscription to anyone who shows him a service that generates more keys.

The same query in keywordtool.io collected only 744 variants, even though keywords are this tool's sole specialty.

I use it mainly to find key queries for YouTube, Amazon and eBay.

After collecting the list of keys, it is important to distribute them across the pages of the site - that is, to cluster them.

I have mentioned this hard-to-pronounce word “clustering” several times already.

Let's take a closer look at it.

Let's start with a remix of the famous tongue twister to make pronunciation easier :-)

Keyword Clustering

Grouping keys by site pages is one of the most time consuming tasks.

Some do it manually, some pay the corresponding services.

This is time consuming and expensive.

I'll show you a free quick way to group the semantic core.

One of the most common mistakes is incorrect grouping of keywords by the pages of the promoted site or clustering of the semantic core.

It's like building a house and not having a building plan.

Distributing the list of keys across site pages is the foundation of any promotion.

A search key is a question asked by an Internet user who wants a relevant answer.

Requests must match the content on the page.

Otherwise, users will start leaving your site.

The search engine will not show in the SERP a resource that has bad behavioral factors.

When key phrases contain 3-4 words, all of the above tools reduce the time for grouping keys, but they lose many combinations along the way.

And what if there are really a lot of keys?

Manual clustering of several thousand keys sometimes takes up to several days.

It is necessary to compare the results for different homogeneous keys.

If the pages in the TOP match, then the keys can be combined into one group.

The best way to look at this issue is with an example.

As you can see, the TOP contains the same URLs, so there is no need to create separate pages for these requests, because users are looking for the same content.

Even if only some of the pages in the search results are the same, the keys can be combined into one group.

The main difficulties in clustering are checking several tens or even hundreds of thousands of keys.

In this situation, mistakes are inevitable.

People are not robots; they get tired.

Deadlines put pressure on them, and the work gets done incompletely.

This applies even to experienced SEOs.

For many years, seeing the same mistakes, I wanted to find a solution to this issue.

Several paid tools have appeared on the Internet that automate the work of clustering keys.

But this also raises the question of quality, price and lead time.

For example, clustering a list of up to 4,000 keys is included in plan B on serpstat.com.

Anything above the plan costs $20 per 3,000 keys.

I respect the work of our colleagues who have created irreplaceable SEO tools, but to be honest, even for one medium project, this is very little.

A single page of a site can attract from several hundred to several thousand keys.

The pricing policy is understandable: the algorithms need to fetch the search results and compare them across homogeneous queries.

These are expended resources plus a commercial component.

At the same time, the search results are constantly changing, and the pages in the TOP change accordingly.

What was relevant may become irrelevant in a couple of months.

The second drawback is time, although this is mitigated by the fact that you can start the process and come back to it when it is finished.

As a rule, it takes up to several hours, depending on the load on the service.

We don't like to wait, and even less to pay :-)

Therefore, we studied the problems of key grouping as much as possible and created our own revolutionary keyword clusterizer that solves the main problems:

  • our tool offers free clustering of an unlimited list of keys (if the service is overloaded, we will introduce a limit of up to 10K keys per day);
  • performs clustering in seconds;
  • allows you to set individual settings depending on the SERP requirements;
  • removes junk and irrelevant requests;
  • combines synonyms into one group;
  • minimizes manual labor.

With the help of our clusterizer, we created ready-to-use semantics of 80 thousand keys for an English-language project in just 20 minutes!

The topic is “dating”, and we haven't lost sight of anything.

A month ago I would have said that this is madness, today it is a reality.

The site contains instructions on how to use the tool, as well as a button “How it works”.

Let's take a quick look at the main elements.

An important note: the fields are optional.

It all depends on the keys chosen.

For the primary test, I fill in only one "Count as one word" field.

I will additionally cluster the finished version.

  • Copy the keys and paste them into the clusterizer form - this is the most common way. For example, from wordstat.yandex.ru or from two columns of Excel. The system recognizes keys and numbers as separate components, and the data is distributed correctly in the final version.
  • The second option is to load a file in txt, csv, xls or xlsx format. You can simply grab semantics from Serpstat, Ahrefs, Google Keyword Planner, Key Collector or other tools. You do not need to prepare them specially for the clusterizer: the system itself distributes everything according to the required parameters. If the clusterizer cannot work out which columns refer to what, a dialog box will appear asking you to specify the selected columns.
  • Next, select the frequency level: HF (high-frequency), MF (mid-frequency), LF (low-frequency) or micro-frequency. Everything here is individual; try different options and check them against real results.
  • Decide whether to check the box "Consider geo-dependency". For example, you are promoting websites in the city of Kharkov. Many pages in the TOP are not optimized for it, which means geo-dependence fades into the background. But if your main request is "repair of refrigerators in Kharkov", you need to take geo-dependence into account.
  • "Advanced Clusters for Semantics" groups non-clustered queries into the most relevant groups. If you disable this function, keys without groups go to the "Not grouped" section.
  • Next, fill out the "Count as one word" form. This is needed to combine several words into a single whole so that the system does not split such phrases into separate clusters. For example: a washing machine. The system will not divide "washing" and "machine" into 2 clusters. Other examples: baby clothes, iPhone 8, online electronics store. If you have several such phrases, enter them separated by commas.
  • Negative keywords immediately eliminate irrelevant keys from the list - for example, the word "free". In order not to filter out phrases such as "free shipping", use the exclamation point operator "!", which prevents the system from matching other forms of the word. For example: !free. (A sketch of this logic follows the list.)
  • The list of ignored words contains words that do not affect the search results. The system automatically ignores prepositions in the Russian and English segments, so you do not need to enter them. For example, in the phrase "Apple iPhone X" the word "Apple" does not affect the results, because users are looking for data on the iPhone. To avoid creating an extra cluster, add it to this form.
  • The last form is synonyms. For example, for commercial queries the words "buy", "price" and "cost" mean the same thing. The system recognizes them as synonyms automatically, so you do not need to enter them. Enter other synonyms - for example, different spellings of "iPhone", or "choose" and "select" - that have the same meaning in the Russian-speaking segment. If there are many synonyms, click the plus and add other options.
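
Here is the sketch promised above: a rough Python illustration of the negative-keyword logic. The tool's exact matching rules may differ; plain negatives are approximated with a prefix match to imitate other word forms, and the example words are hypothetical:

```python
# Plain negative "gift" drops any word form ("gift", "gifts", ...);
# exact-form negative "!gift" drops only the exact word "gift".
def is_dropped(key: str, negatives: list[str]) -> bool:
    words = key.lower().split()
    for neg in negatives:
        if neg.startswith("!"):
            if neg[1:].lower() in words:                      # exact form only
                return True
        elif any(w.startswith(neg.lower()) for w in words):   # any word form
            return True
    return False

keys = ["gift card", "gifts for men", "buy flowers"]
print([k for k in keys if not is_dropped(k, ["gift"])])   # -> ['buy flowers']
print([k for k in keys if not is_dropped(k, ["!gift"])])  # -> ['gifts for men', 'buy flowers']
```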

To get the final version, click "SEARCH" and you get a clustered list.

Select the relevant keys with check marks.

We compared the results with paid clusterizers, the accuracy of the data obtained in our tool is higher.

It is more convenient and faster to work in than even Excel, which slows down when you add a huge list of keys and a large number of formulas.

I would post the results of our comparisons, but I think this will be incorrect in relation to our colleagues.

Plus, on our part, it is biased to give examples that can be considered successful.

Therefore, I leave everything to the readers' judgment.

I would be glad to hear your opinion in the comments.

Of course, our clusterizer is not a magic pill that solves all problems.

Even Google's tools don't show accurate data in clustering.

Our clusterizer saves a tremendous amount of time.

Ready-made lists are easier to check and organize by site pages.

Promotion of low-frequency queries

Promotion for low-frequency queries is a start for any young project.

With a limited budget, do not try to knock experienced large projects out of the TOP-10.

I will show you effective ways to find low frequency keys.

Most young site owners initially select high-frequency and mid-frequency queries.

These are keys like "buy iphone", "apartments for rent", etc.

For these queries, the TOP is occupied by high-trust sites that clearly do not intend to leave it.

SEO budgets for such resources are many times higher, plus additional trust helps to promote them with less effort.

You will never push out of the TOP sites with millions of visits that everyone knows about.

A young resource needs to focus on low-frequency queries; moreover, according to MOZ analysis, 80% of all Internet sales come from low-frequency queries.

Low-frequency queries contain 4 or more words with a frequency of up to 1000 people per month.

Create content for them and get traffic in the near future.
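
As a quick illustration, here is a minimal Python sketch that picks low-frequency candidates out of a collected list using the rule of thumb above (the keys and frequencies are hypothetical):

```python
# Keep keys with 4+ words and up to 1000 queries per month.
keys = {  # hypothetical keys and frequencies
    "buy iphone": 95000,                          # HF - skip for a young site
    "buy iphone 8 plus kiev installment": 320,    # LF - good candidate
    "apartments for rent": 140000,
    "rent one bedroom apartment downtown cheap": 210,
}

low_freq = {k: f for k, f in keys.items() if len(k.split()) >= 4 and f <= 1000}
print(low_freq)
```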

You can search for low-frequency queries using a variety of tools.

Let's take a look at the main ones.

4.1. Use search suggestions: Google, Yandex, Bing, Facebook, Twitter, Pinterest, Wikipedia, Amazon, any other sites that have such a function.

This is, of course, a lot of manual work and a headache, but this approach lets you find real keys for promotion.

4.2. Use forums that represent your topic, especially the likes of Reddit.

Find threads that have collected a lot of comments on your topic.

Copy the name of the branch and create content for these keys.

Let's take an example of how to compete for well-known queries with such monsters as Amazon, Expedia, Yelp in the American segment.

For example, you are promoting the query "fly ticket".

Sites such as Expedia, Kayak, which have more than 4 million traffic for branded queries alone, are ranked with these keys!

Check the SERP: the first 4 results are contextual advertising.

Below them, the organic results contain only monsters with traffic of at least several million.

Believe me, it is unrealistic to compete with them for these keys.

You need to look for queries that these resources do not promote.

Many Western SEO companies do not use key guessing tools at all for small commercial sites.

Enter your main query into a Reddit search.

Check out popular threads that have scored a lot of points and comments.

Copy the title or its main part.

For example, I entered the “fly ticket” key into the Reddit search and browse the popular threads.
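
This step can also be scripted: Reddit exposes its search results as JSON. A minimal sketch (Reddit expects a descriptive User-Agent, and heavy use may be rate-limited):

```python
# Pull popular thread titles from Reddit search as low-frequency key ideas.
import requests

resp = requests.get(
    "https://www.reddit.com/search.json",
    params={"q": "fly ticket", "sort": "top", "limit": 25},
    headers={"User-Agent": "keyword-research-sketch/0.1"},
    timeout=10,
)
for post in resp.json()["data"]["children"]:
    d = post["data"]
    print(d["score"], d["num_comments"], d["title"])
```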

Don't be fooled by traffic predictions based only on the keys you see in the thread title.

If your goal is to get into the TOP and receive traffic, then you need to analyze this parameter.

Some experts check the cost of a click and the level of competition of contextual advertising, but this data can differ significantly from the indicators in SEO.

This is more interesting for informational purposes, but not for determining the budget for SEO.

To analyze the level of competition in SEO, it is best to use ahrefs.com, majestic.com, moz.com, semrush.com.

Recently SEMrush merged its donor (backlink) base with Majestic's, so the quality of donor screening there is also excellent.

Do not try to promote highly competitive queries with a small budget.

Instead, focus on low-competition keys.

LSI (homogeneous queries)

Homogeneous queries (LSI) increase content visibility and thus traffic.

More traffic means more sales.

I will show you all the effective methods of finding LSI queries.

LSI (Latent Semantic Indexing) queries are homogeneous queries that are shown at the bottom of the SERP.

The search engine offers them to readers who have not found useful information in the TOP-10, so that they can rephrase the query.

With the help of such keys, you can expand the content or create a new one.

It already depends on your clustering.

Keep in mind that homogeneous queries are shown for your current IP, so when promoting a site in another region, the queries need to be adjusted for that region.

If you do not want to play with changing IP, use the application for Google Chrome - GeoClever.

After installing it, right in the search you can select any city in the world, down to little-known ones.

A quick list of search suggestions can be obtained using wordstat.yandex.ru.

To do this, after entering the main key, look through the right block.

Let's check out the query “SEO optimization”.

As you can see, we received more options here than in the Yandex and Google search suggestions.

And if you want to collect all the homogeneous queries for YouTube, Bing, Yahoo, eBay, Amazon and Google - something only software can do - use Scrapebox (see point 5).

The disadvantage of this program is that it costs $ 67.

For work, it requires a base of proxy IPs, which can be bought on the Internet.

The good news is that such a large number of search suggestions is hard to get anywhere else.

Also, the software is multifunctional, it helps to automate many other manual processes.

With the help of Scrapebox, I have collected 7786 results for the query “SEO Optimization”.

Of course, many of these keys are junk keys.

Use the clusterer from point 3 to filter out unnecessary keys.

Also in the program you can check the real traffic of the selected requests.

Pareto principle

Prioritizing is essential to getting results.

To do this, use the Pareto principle.

I will show you the most effective methods for choosing priority keys for promotion.

The Italian economist Vilfredo Pareto discovered in 1886 the principle that 20% of efforts yield 80% of results.

He found that 20% of the Italian population owned 80% of the land, and that 20% of pea bushes provided 80% of the pea crop.

This principle still works today.

Dear women, it follows from this that men prepare for congratulations in advance.

But SEOs should be ready to promote these keys even earlier.

Don't try to promote a highly competitive request in a short time frame.

It's like trying to get in shape a month before the beach season.

Those who didn't start in time are simply late.

Get ready, as I always do, for the next year.

Optimizing meta information

Meta information's job is to tell the user what your page is about.

Also, meta tags help the search engine to match the keys with the content of the site.

I will show you how to properly optimize the meta information on your website.

Have you got your list of keys distributed across the pages?

Now move on to creating meta information - Title & Description.

Many people split off this process and hand the writing of tags over to copywriters.

Do not do this under any circumstances.

Even great copywriters get their meta tags wrong.

As a result, there will be no traffic. Your keys do not match your content.

And, as we know, the more clicks, the more conversions.

Search engines will not show in the TOP sites that users do not click on.

They regard them as irrelevant.

Let's first take a look at what meta tags are and where they can be found.

Title is a tag in the page code that looks like this: <title>This is the title of your page</title>

It is embedded in the page code, not displayed in the internal content.

You can see it in your browser tab.

It also appears when a page is shared on social networks, for example on Facebook.

But the most important place it is displayed is in the search results.

The Description meta tag, or short page description, appears in the code as follows: <meta name="description" content="A short description of your page">

The size of the displayed meta Description tag on Google is about 160 characters.

This meta tag can be omitted, unlike Title.

In such a situation, the search engine selects content from your page that will be most relevant to the search keys.

If you are not sure about the automatic selection of a search engine, write Description.
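
A tiny Python sanity check along these lines can save you from truncated snippets. The ~160-character Description limit is mentioned above; the ~60-character Title limit used here is a common rule of thumb rather than an official Google number, and the example values are invented:

```python
# Warn about Title/Description lengths that are likely to be truncated.
def check_meta(title: str, description: str) -> None:
    if len(title) > 60:
        print(f"Title is {len(title)} chars - may be cut in the SERP")
    if len(description) > 160:
        print(f"Description is {len(description)} chars - may be cut")

check_meta(  # hypothetical shop and texts
    "Custom Cakes to Order in Tula - Free Delivery | SweetShop",
    "Order homemade cakes with delivery in Tula. Mastic, biscuit and "
    "sponge cakes for birthdays and weddings. Discounts for new customers.",
)
```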

How do you increase the click-through rate of your Title?

The principle is simple: insert the URL into the service's search form or type in a potential Title.

The system then gives you a score from 0 to 100 and offers recommendations for optimization.

Let's take a closer look at the Title optimization techniques.

9.1. Add emoticons

9.2. Use numbers

Numbers attract attention, and unusual ones work better than the standard 10, 20, 50, etc.

Why do you think the Title of this article is “The Semantic Core of the Site: 9 Examples of Compilation (Course 2018)”?

The number "9" feels more real than 10, 20, 50, 100...

Odd numbers do not create a feeling of understatement or, on the contrary, of padded information: here we have listed our best methods of compiling the semantic core rather than stretching to a tenth one.

9.3. Use brackets

According to HubSpot's analysis, using brackets increases click-through rates by 38%.

This is a huge plus, because you can insert synonyms in brackets or highlight important data.

Use different brackets: round, square.

9.4. Arouse curiosity

The best way to provoke a click is to generate curiosity.

Think about what might make your audience feel this way.

Think of breaking news that a star has died.

When you go to the site, it turns out the information was fake.

The main goal is the click!

The best example is Dale Carnegie and the titles of his books: How to Win Friends and Influence People, How to Stop Worrying and Start Living.

For several generations, these names have provoked people to read his work.

9.5. Include call-to-action words

A lot of call-to-action words appear in content, but can they also be added to the Title?

Use different options: discounts, cheap, get free, download.

To find call-to-action words, analyze the search results for contextual advertising.

In Google Adwords and Yandex.Direct, it is very important to attract a lot of clicks.

If your ad does not get clicked, each click costs you more, which is why contextual advertising specialists pay special attention to this.

Let's look at examples of how to find such call-to-action words.

Let's enter the search keys “buy an iPhone 8 Kiev”.

From these ads, collect a base of call-to-action words and choose the ones that match the content of your site.

Another technique is used when setting up remarketing.

Marketers lure those who left the cart without a purchase with additional discounts.

This significantly increases the percentage of sales.

Try this same trick when filling in your Title.

Offer discounts, promotions. People love this very much!

9.6. Use your domain in Title

When I talked about using the domain in the Title as part of the Everest technique, some Internet users wrote in the comments that this is complete nonsense.

Honestly, I thought so too.

I didn't understand why many sites use their brand in a short name.

You can add additional keys there instead.

My opinion changed dramatically after I read a lot of research on the subject.

The bottom line is your openness to Internet users.

It is the addition of the brand that significantly increases the click-through rate.

It is best to use your brand at the end of the Title.

Putting it at the beginning draws attention away from the main keys.

9.7. Capitalize each word

It is this point that causes most doubts among many SEOs.

They doubt that this is in line with the rules of the Russian language.

In English, it is considered correct to capitalize each word in a title (title case).

Let's take a look at how everything happens in contextual advertising.

As you can see from the screenshot, this technique is used not only in Title, but also in the description.

Why is this done?

Capital letters get more attention and the percentage of clicks increases accordingly.

In Russian-speaking organics, this technique is rarely used, so I leave everything to the readers' judgment.

Personally, I have not found any rule of the Russian language that says this is incorrect.

I suggest discussing it in the comments.

CONCLUSION

Initially, I wanted to write an article about SEO site optimization and started with keywords.

But in the process of creating the material, it became clear that there is a lot of information on this issue.

This is how we ended up with an article about searching and composing keywords.

You don't need to be limited to one tool to find keys.

This is somewhat like a brainstorming session (see point 1), where all the ideas are initially collected for several days by the whole team.

At the final stage, the workable ideas - those you have the time and resources for - are singled out from the merely good ones.

It's the same with keys: initially, you need to collect a huge list of requests.

For this, it is important to use paid and free tools.

The next step is to eliminate a lot of irrelevant keys and keep only those that fit your purposes.

To do this, use the Keyword Clusterizer, which will collect all the keys into groups.

They should be in line with your priorities.

Don't try to promote everything.

A bird in the hand is worth two in the bush.

Use the Pareto principle - 20% of goods generate 80% of the profit.

Focus on the low-frequency keys that account for 80% of all online sales.

Don't try, on a tight budget, to wrestle with big expert sites that pour millions into promotion.

Better find your niche.

Use forums and search tips for this.

Use LSI (homogeneous queries) to expand the list of keys and existing content.

Check the seasonality of your chosen keys with Google Trends.

Prepare in a timely manner for the promotion.

Don't put it off until the last moment.

Optimize meta information for the selected keys, especially Title.

This is the second most important internal ranking factor; whether a visitor goes to your site or not depends on its attractiveness.

If you are reading these lines, then you have mastered the article, for which I am incredibly grateful to you.

I propose to continue the discussion in the comments.

Organic search is the most effective source of targeted traffic. To use it, you need to make the site interesting and visible to users of the search engines Yandex and Google. There is no need to reinvent the wheel here: it is enough to define what the audience of your project is interested in and how they seek information. This task is solved when building a semantic core.

The semantic core is a set of words and phrases that reflect the topic and structure of the site. Semantics is a branch of linguistics that studies the meaning of language units. In this sense, the terms "semantic core" and "core of meaning" are the same thing. Remember this: it will keep you from sliding into keyword stuffing - cramming content with keywords.

Composing the semantic core, you answer the global question: what information can be found on the site. Since one of the main principles of business and marketing is customer focus, you can look at the creation of the semantic core from the other side. You need to determine what search terms users use to search for information that will be published on the site.

The construction of the semantic core solves another problem: the distribution of search phrases across the pages of the resource. Working with the core, you determine which page will best answer a particular search query or group of queries.

There are two approaches to solving this problem.

  • The first assumes creating the site structure based on the results of analyzing users' search queries. In this case, the semantic core defines the framework and architecture of the resource.
  • The second involves preliminary planning of the resource structure before analyzing search queries. In this case, the semantic core is distributed over the finished framework.

Both approaches work in one way or another. But it is more logical to first plan the structure of the site and then determine the queries by which users will be able to find a particular page. In this case, you remain proactive: you choose what to tell potential customers. If you fit the resource structure to the keys, you remain an object that reacts to the environment rather than actively changing it.

The difference between SEO and marketing approaches to building a core needs to be clearly emphasized here. Here's the logic of a typical old-school SEO: to build a website, you need to find keywords and select phrases that will just get to the top of the results. After that, you need to create a site structure and distribute the keys across the pages. The page content needs to be optimized for key phrases.

This is the logic of a businessman or a marketer: you need to decide what information to broadcast to the audience using the site. To do this, you need to know your industry and business well. First, you need to plan a rough site structure and a preliminary list of pages. After that, when building a semantic core, you need to find out how the audience is looking for information. With the help of content, you need to answer the questions that the audience asks.

What are the negative consequences of using the "SEO" approach in practice? When development starts from keyword statistics rather than business goals, the information value of the resource decreases. The business must shape trends and choose what to say to customers. A business should not limit itself to reacting to search phrase statistics and creating pages only for the sake of optimizing the site for some key.

The planned result of building a semantic core is a list of key queries distributed across the pages of the site. It contains page URLs, search queries and an indication of their frequency.

How to build a site structure

The site structure is a hierarchical page layout. With its help, you solve several problems: you plan the information policy and the logic of information presentation, ensure the usability of the resource, and ensure that the site meets the requirements of search engines.

To build a structure, use a convenient tool: spreadsheet editors, Word or other software. You can also draw the structure on a piece of paper.

When planning your hierarchy, answer two questions:

  1. What information do you want to communicate to users?
  2. Where should this or that information block be published?

Imagine planning a site structure for a small pastry shop. The resource includes information pages, a publications section, and a showcase or product catalog. Visually, the structure might look like this:

For further work with the semantic core, arrange the site structure as a table. In it, indicate the page names and their hierarchy. Also include columns for page URLs, keywords and frequency. The table might look like this:
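
A simplified sketch for the pastry-shop example (the page names are hypothetical; the URL, Keys and Frequency columns are filled in later):

```
Page                Parent     URL    Keys   Frequency
Home                -          ...    ...    ...
Catalog             Home       ...    ...    ...
  Cakes to order    Catalog    ...    ...    ...
  Pastries          Catalog    ...    ...    ...
Blog                Home       ...    ...    ...
About the company   Home       ...    ...    ...
```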

You will fill in the URL, Keys and Frequency columns later. Now go to search for keywords.

What you need to know about keywords

To build a semantic core, you must understand what keywords are and which keys the audience uses. With this knowledge, you will be able to use keyword research tools correctly.

What keys are used by the audience

Keys are words or phrases that potential customers use to find the information they need. For example, to make a cake, the user enters the query "Napoleon recipe with photo" into the search box.

Keywords are classified according to several criteria. By popularity, they are divided into high-, medium- and low-frequency queries. According to various sources, search phrases are grouped as follows:

  • Low-frequency: queries with up to 100 impressions per month. Some experts raise the threshold to 1,000 impressions.
  • Mid-frequency: queries with up to 1,000 impressions. Sometimes the threshold is raised to 5,000 impressions.
  • High-frequency: queries with more than 1,000 impressions. Some authors count keys with 5,000 or even 10,000 queries as high-frequency.

The difference in frequency estimates is due to the differing popularity of topics. If you are building a core for an online laptop store, the phrase "buy samsung laptop" with about 6,000 queries per month will be mid-frequency. If you are building a core for a sports club site, the query "aikido section" with about 1,000 queries will be high-frequency.

What do you need to know about frequency when composing a semantic core? According to various sources, from two-thirds to four-fifths of all user requests are low-frequency. Therefore, you need to build as broad a semantic core as possible. In practice, it must constantly expand with low-frequency phrases.

Does this mean that high- and mid-frequency queries can be ignored? No, you cannot do without them. But treat low-frequency keys as the main resource for attracting targeted visitors.
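
To make the banding concrete, here is a minimal Python sketch; the thresholds are parameters precisely because, as noted above, they depend on the topic:

```python
# Tag a key by frequency band with topic-dependent thresholds.
def frequency_band(impressions: int, lf: int = 100, mf: int = 1000) -> str:
    if impressions <= lf:
        return "LF"
    if impressions <= mf:
        return "MF"
    return "HF"

print(frequency_band(70), frequency_band(800), frequency_band(6000))
# -> LF MF HF
```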

According to the needs of users, the keys are combined into the following groups:

  • Informational. The audience uses them to find information. Examples: "how to store baked goods correctly", "how to separate the yolk from the white".
  • Transactional. Users enter them when they plan to take an action. This group includes keys like "buy a bread maker", "download a recipe book", "order pizza with delivery".
  • Other queries. These are key phrases whose intent is difficult to determine. For example, with the key "cake" a person may be planning to buy one or to bake one themselves. In addition, the user may want information about cakes in general: definition, features, classification, etc.

Some experts single out navigational queries as a separate group. With their help, the audience searches for information on specific sites. Examples: "svyaznoy laptops", "city express track delivery", "sign up for LinkedIn". Navigational queries that are not related to your business can be ignored when compiling the semantic core.

How do you use this classification when constructing the semantic core? First, consider the needs of your audience when distributing keys across pages and creating your content plan. Everything is obvious here: publications in informational sections should answer informational queries, and most of the key phrases without an explicit intent should also go there. Transactional queries should be answered by pages in the "Store" or "Showcase" sections.

Second, remember that many transactional queries are commercial. To attract natural traffic for the query "buy a Samsung smartphone", you will have to compete with Euroset, Eldorado and other business heavyweights. To avoid unequal competition, use the advice above: expand your core as much as possible and reduce the average query frequency. For example, the frequency of the query "buy a Samsung Galaxy S6 smartphone" is an order of magnitude lower than that of "buy a Samsung Galaxy smartphone".

What you need to know about the anatomy of search queries

Search phrases consist of several parts: a body, a specifier and a tail. This is easiest to see with an example.

Take the query "cake". It cannot be used to determine the user's intent. It is high-frequency, which means high competition in the search results. Promoting with this query will bring a large share of non-targeted traffic, which negatively affects behavioral metrics. The high frequency and non-specificity of the "cake" query come from its anatomy: it consists only of the body.

Pay attention to the query "buy a cake". It consists of the body "cake" and the specifier "buy". The latter determines the user's intent. Specifiers indicate whether a key is transactional or informational. Take a look at the examples:

  • Buy a cake.
  • Cake recipes.
  • How to serve a cake.

Sometimes specifiers express exactly opposite intents. A simple example: "buy a car" versus "sell a car".
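
A toy Python sketch of guessing intent from the specifier, as described above; the word lists are illustrative assumptions, not an exhaustive dictionary:

```python
# Guess query intent from its specifier words.
TRANSACTIONAL = {"buy", "order", "price", "download", "sell"}
INFORMATIONAL = {"how", "why", "what", "recipe", "recipes", "serve"}

def intent(query: str) -> str:
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "unclear"  # body only, like the bare query "cake"

for q in ["cake", "buy a cake", "cake recipes", "how to serve a cake"]:
    print(q, "->", intent(q))
```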

Now look at the query "buy cake with delivery". It consists of a body, a specifier and a tail. The tail does not change the user's intent or informational need - it details it. Take a look at the examples:

  • Buy cake online.
  • Buy a cake in Tula with delivery.
  • Buy homemade cake in Oryol.

In each case, the intention of the person to purchase the cake is visible. And the tail of the key phrase details this need.

Knowledge of the anatomy of search phrases allows you to derive a conditional formula for the selection of keys for the semantic core. You must define basic terms related to your business, product, and user needs. For example, customers of a confectionery firm are interested in cakes, pastries, cookies, pastries, cupcakes and other confectionery products.

After that, you need to find the tails and specifiers that your audience uses with the basic terms. Phrases with tails simultaneously increase your reach and reduce competition within the core.

Long tail is a term for the strategy of promoting a resource with low-frequency keywords. It consists in using the maximum number of keys with a low level of competition. Low-frequency promotion ensures high efficiency of marketing campaigns. This is due to the following factors:

  • Promotion by low-frequency keywords requires less effort than promotion by high-frequency competitive queries.
  • Working with long-tail queries is practically guaranteed to bring results, although marketers cannot always predict exactly which keywords will generate traffic. With high-frequency queries, honest marketers cannot guarantee results at all.
  • Low-frequency queries match the search results more specifically to user needs.

For large sites, the semantic core can contain tens of thousands of requests, and it is almost impossible to select and correctly group them by hand.

Services for compiling the semantic core

There are quite a few keyword research tools out there. You can build a core using paid or free services and programs. Choose a specific tool depending on your tasks.

Key Collector

You cannot do without this tool if you are engaged in internet marketing professionally, develop several sites or compose the core for a large site. Here is a list of the main tasks that the program solves:

  • Selection of keywords. Key Collector collects queries through Yandex's Wordstat.
  • Parsing search suggestions.
  • Clipping inappropriate search phrases with stop words.
  • Filtering requests by frequency.
  • Search for implicit duplicate queries.
  • Definition of seasonal requests.
  • Collection of statistics from third-party services and platforms: Liveinternet.ru, Metrika, Google Analytics, Google AdWords, Direct, Vkontakte and others.
  • Search for pages relevant to the request.
  • Search query clustering.

Key Collector is a multifunctional tool that automates the operations required to build a semantic core. The program is paid. You can perform all the actions Key Collector "knows how to do" with free alternatives, but you will have to use several services and programs for that.

SlovoEB

This is a free tool from the creators of Key Collector. The program collects keywords through Wordstat, determines the frequency of queries, parses search suggestions.

To use the program, specify the username and password of your Yandex.Direct account in the settings. Do not use your main account: Yandex may block it for automated requests.

Create a new project. On the "Data" tab, select the "Add phrases" option. Specify search phrases that the project audience is supposed to use to find information about products.

In the "Collect keywords and statistics" section of the menu, select the required option and run the program. For example, determine the frequency of key phrases.

The tool allows you to select keywords, as well as automatically perform some tasks related to the analysis and grouping of queries.

Keyword selection service Yandex Wordstat

To see which phrases a page is displayed for in Yandex search results, open the "Search queries" tab in the Yandex.Webmaster panel -> "Recent requests".

We see the phrases for which there were clicks or the site snippet was shown in the TOP-50 of Yandex for the last 7 days.

To view data only for the page that interests us, you need to use filters.

The possibilities of searching for additional phrases in Yandex.Webmaster are not limited to this.

Go to the "Search queries" tab -> "Recommended queries".

There may not be many phrases here, but you can find additional ones for which the promoted page does not yet reach the TOP-50.

Query history

The big disadvantage of visibility analysis in Yandex.Webmaster is, of course, that data is only available for the last 7 days. To partially get around this limitation, you can try to supplement the list using the "Search queries" tab -> "Query history".

Here you will need to select "Popular Searches".

You will receive a list of the most popular phrases from the last 3 months.

To get phrases from Google Search Console, go to the "Search traffic" tab -> "Analysis of search queries." Next, select "Impressions", "CTR", "Clicks". This will allow you to see more information that can be useful when analyzing phrases.

By default, the tab displays data for 28 days, but you can expand the range to 90 days. You can also select the desired country.

As a result, we get a list of requests, similar to the one shown in the screenshot.

New version of Search Console

Google has already made some tools available in the new version of the panel. To view queries for a page, go to the "Status" tab -> "Performance".

In the new version, the filters are arranged differently, but the filtering logic is preserved, so there is no point in dwelling on it. Of the significant differences, it is worth noting the ability to analyze data for a longer period, not just 90 days - a significant advantage compared to Yandex.Webmaster (only 7 days).

Analysis services for competing websites

Competitor sites are a great source of keyword ideas. If you are interested in a specific page, you can manually determine the search phrases it is optimized for. To find the main keys, it is usually enough to read the material or check the content of the keywords meta tag in the page code. You can also use services for semantic text analysis, for example Istio or Advego.

If you need to analyze an entire site, use comprehensive competitive analysis services.

You can use other tools to collect key phrases as well, for example Google Trends, WordTracker, WordStream, Ubersuggest and Topvisor. But do not rush to master all services and programs at once. If you are building the semantic core for your own small site, use a free tool such as the Yandex keyword selection service or the Google Keyword Planner.

How to find keywords for the semantic core

The process of selecting key phrases consists of several stages:

  1. First, you define the base keywords the audience uses to search for your product or business.
  2. The second stage is devoted to expanding the semantic core.
  3. In the third stage, you remove inappropriate search phrases.

Defining base keys

Fill in a spreadsheet or write down general search phrases related to your business and products. Gather colleagues and brainstorm. Record all proposed ideas without discussion.

Your list will look something like this:

Most of the keys you wrote down are high in frequency and low in specificity. In order to get high-specificity mid- and low-frequency search phrases, you need to expand your core as much as possible.

Expanding the semantic core

You will accomplish this task using keyword research tools such as Wordstat. If your business has a regional binding, select the appropriate region in the settings.

Using the service for selecting key phrases, you need to analyze all the keys recorded at the previous stage.

Copy the phrases from the left column of Wordstat and paste into the table. Pay attention to the right column of Wordstat. In it, Yandex offers phrases that people used in conjunction with the main query. Depending on the content, you can immediately select the appropriate keys from the right column or copy the entire list. In the second case, inappropriate requests will be eliminated at the next stage.

The result of this stage is a list of search phrases for each basic key from your brainstorm. The lists can contain hundreds or thousands of queries.

Remove inappropriate search phrases

This is the most time consuming stage of working with the kernel. You need to manually remove inappropriate search phrases from the kernel.

Do not use frequency, competition or other purely "SEO" metrics as the criterion for evaluating keys. Do you know why old-school SEOs consider certain search phrases junk? Take the key "diet cake": the Wordstat service predicts just 3 impressions per month for it in the Cherepovets region.

To promote pages for specific keywords, old school SEOs bought or rented links. By the way, some experts still use this approach. It is clear that search phrases with low frequency in most cases do not pay off the money spent on buying links.

Now look at the phrase "diet cakes" through the eyes of an average marketer. Some of the confectionery company's target audience really are interested in such products. Therefore, the key can and should be included in the semantic core. If the pastry shop prepares appropriate products, the phrase will come in handy in the product description section. If the company for some reason does not work with diet cakes, the key can be used as a content idea for the informational section.

What phrases can be safely excluded from the list? Here are some examples:

  • Keys mentioning competing brands.
  • Keys mentioning products or services that you do not sell and do not plan to sell.
  • Keys that include the words "inexpensive", "cheap" or "at a discount". If you are not dumping prices, cut off bargain hunters so as not to spoil your behavioral metrics.
  • Duplicate keys (see the sketch after this list). For example, of the three keys "custom-made birthday cakes", "birthday cakes to order" and "cakes to order for a birthday", it is enough to keep the first.
  • Keys mentioning inappropriate regions or addresses. For example, if you serve residents of the Northern District of Cherepovets, the key "cakes to order Industrial District" does not suit you.
  • Phrases entered with errors or typos. Search engines understand that the user is looking for croissants even if they type the misspelled "croisants" in the search box.
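
Here is the promised sketch for catching implicit duplicates, in Python: keys made of the same words in a different order are kept only once. Real tools use smarter morphology; this only illustrates the idea:

```python
# Treat keys with the same set of words as duplicates and keep the first.
def dedupe(keys: list[str]) -> list[str]:
    seen, result = set(), []
    for key in keys:
        signature = frozenset(key.lower().split())
        if signature not in seen:
            seen.add(signature)
            result.append(key)
    return result

print(dedupe([
    "custom-made birthday cakes",
    "birthday cakes custom-made",
    "wedding cakes to order",
]))
# -> ['custom-made birthday cakes', 'wedding cakes to order']
```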

After deleting the inappropriate phrases, you get a list of queries for the basic key "cakes to order". Draw up the same lists for the other base keys from the brainstorming phase. After that, move on to grouping key phrases.

How to group keywords and build a relevance map

The search phrases that users use (or will use) to find your site are combined into semantic clusters; this process is called search query clustering. Clusters are groups of queries that are similar in meaning. For example, the semantic cluster "Cake" includes all key phrases associated with this word: cake recipes, ordering a cake, photos of cakes, wedding cakes, etc.

A semantic cluster is a group of queries united by meaning. It is a multi-level structure. Inside the first-order cluster "Cake" there are second-order clusters "Cake recipes", "Ordering cakes", "Photos of cakes". Within the second-order cluster "Cake recipes" one could theoretically distinguish a third order: "Recipes for cakes with mastic", "Recipes for biscuit cakes", "Recipes for shortbread cakes". The number of levels depends on the breadth of the topic. In practice, in most topics it is sufficient to single out the business-specific second-order clusters within the first-order ones.

In theory, a semantic cluster can have many levels.
In practice, you will have to work with clusters of the first and second levels.

You will brainstorm most of the first-level clusters when you write down your basic key phrases. To do this, it is enough to understand your own business and to glance at the site diagram you drew up before starting work on the semantic core.

It is very important to perform second-level clustering correctly. Here, search phrases differ by specifiers that indicate user intent. A simple example is the "cake recipes" and "custom cakes" clusters. The phrases of the first cluster are used by people who need information; the keys of the second are used by customers who want to buy a cake.

Suppose you have identified the search phrases for the "custom cakes" cluster using Wordstat and manual filtering. Now they must be distributed among the pages of the "Cakes" section.

For example, in the cluster there are searches for “custom-made football cakes” and “custom-made soccer cakes”.

If there is a corresponding product in the company's assortment, create a corresponding page in the "Mastic Cakes" section. Add it to the site structure: specify the name, URL and search phrases with frequency.

Use Keyword Research or similar tools to see what other search phrases potential customers are using to find soccer-themed cakes. List the pages that are relevant to your keyword list.

In the list of cluster search phrases, mark the distributed keys in a convenient way. Distribute the remaining search phrases.

If necessary, change the structure of the site: create new sections and categories. For example, the page "Custom Paw Patrol Cakes" should go under the "Baby Cakes" section. At the same time, it can be included in the "Mastic Cakes" section.

Pay attention to two points. First, a cluster may contain no suitable phrases for a page you plan to create. This can happen for various reasons, such as imperfect tools for collecting search phrases, their incorrect use, or low popularity of the product.

The absence of a suitable key in the cluster is not a reason to refuse to create a page and sell a product. For example, imagine that a confectionery company sells children's cakes featuring Peppa Pig's characters. If the corresponding keys are not included in the list, clarify the needs of the audience using Wordstat or another service. In most cases, there will be suitable queries.

Secondly, even after removing unnecessary keys, search phrases may remain in the cluster that are not suitable for the created and scheduled pages. They can be ignored or used in another cluster. For example, if a pastry shop for some reason fundamentally does not sell Napoleon cake, the corresponding key phrases can be used in the Recipes section.

Search query clustering

Grouping of search queries can be done manually, in Excel or Google spreadsheets, or automated, using special applications and services.

Clustering allows you to understand how requests can be distributed across the pages of the site for their fastest and most effective promotion.

Automatic clustering or grouping of search queries of the semantic core is carried out based on the analysis of sites included in the TOP-10 results of the search engines Google and Yandex.

How automatic query grouping works: for each query, the TOP-10 search results are collected. If at least 4-6 of those URLs coincide between queries, the queries can be grouped on one page.
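
A minimal Python sketch of this grouping rule, assuming the TOP-10 URL sets have already been collected by a rank-checking tool (the queries and URLs below are toy data):

```python
# Group queries whose TOP-10 URL sets share at least `threshold` results.
def group_queries(serps: dict[str, set[str]], threshold: int = 4) -> list[list[str]]:
    groups: list[list[str]] = []
    for query, urls in serps.items():
        for group in groups:
            # compare with the first query of the group
            if len(urls & serps[group[0]]) >= threshold:
                group.append(query)
                break
        else:
            groups.append([query])
    return groups

serps = {  # toy TOP snippets, URLs shortened
    "buy cake": {"a.com", "b.com", "c.com", "d.com", "e.com"},
    "order cake": {"a.com", "b.com", "c.com", "d.com", "f.com"},
    "cake recipe": {"x.com", "y.com", "z.com", "w.com", "v.com"},
}
print(group_queries(serps))
# -> [['buy cake', 'order cake'], ['cake recipe']]
```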

Automatic grouping is the fastest and most effective method of combining keywords into an almost ready-to-use site structure.

If the grouping is incorrect from the point of view of search engine statistics, it will, alas, be impossible to form the site structure, distribute queries among its pages and successfully promote those pages to the TOP!

Applications and services for automatic grouping of search queries

Among the services that automate the grouping of keywords, it is worth highlighting:

  • Key Collector.
  • Rush Analytics.
  • TopVisor.

After distributing all the keys, you will receive a list of existing and planned site pages with URL, search phrases and frequency. What to do with them next?

What to do with the semantic core

A table with the semantic core should become your roadmap and main source of ideas when forming the content plan and the site structure.

Look: you have a list with preliminary page titles and search phrases. They define the needs of the audience. When drawing up a content plan, you just need to clarify the title of the page or publication. Include your main search term in it. This is not always the most popular key. In addition to popularity, the query in the title should best reflect the needs of the page audience.

Use the rest of the search phrases as an answer to the question "what to write about". Remember: you do not have to force every search phrase into your article or product description. The content should cover the topic and answer users' questions. Once again: focus on information needs, not on search phrases and how to fit them into the text.

Semantic core for online stores

For online stores, the specificity of preparing and clustering semantics lies in four groups of pages that are very important for further work:

  • Home page.
  • Pages of sections and subsections of the catalog.
  • Product card pages.
  • Blog article pages.

Above, we already talked about the different types of search queries: informational, transactional, commercial, navigational. For the section and product pages of an online store, transactional queries are of primary interest - queries from search engine users who want to see sites where they can make a purchase.

Start forming the core with a list of the products you already sell or plan to sell.

For online stores:

  • the "body" of the query will be the product names;
  • the "specifiers" will be phrases: "buy", "price", "sale", "to order", "photo", "description", «