The semantic core: what it is, how to build one from scratch, and which services to use
The semantic core is the foundation of website promotion. Without it, you will not get a site to the top of search results and keep it there. In this article we explain what a semantic core consists of, where to look for keywords, and which tools to use.
What is the semantic core
To simplify, let's say the semantic core (SC) is the full set of words, phrases and their variations that describe the content of your site. The more accurate and complete the core, the easier it is to promote the site.
Roughly speaking, it is one long list of words and phrases (keywords) that users type when searching for similar goods and services. There are no universal guidelines for core size, but one rule holds: the bigger and the higher quality, the better. Just don't inflate the size artificially. If you chase size at the expense of quality, all the work goes to waste: the core will not work.
Here's an analogy. Imagine you are the head of a large construction company that needs to build many sites in a short time. Your budget is unlimited, but the union requires you to hire at least a hundred people. Which hundred will you hire for such responsible work: anyone at all, or carefully selected people, since the budget allows it? Whoever you hire is who you will be building houses with. It's reasonable to assume you will choose carefully, because the result depends on it.
It's the same with the core. For it to work even at a basic level, it should contain at least a hundred keys. And if you pad it with just anything to make it a little bigger, the result is guaranteed to fail.
General rules for constructing a semantic core
One query, one page. For each query, you need to know exactly which page to send the user to. Never let several pages target the same query: internal competition arises and promotion quality drops sharply.
The user gets predictable content for his query. If a customer is looking for shipping options to his region, do not send him to the home page if that information is not there. Sometimes, after compiling the core, it becomes clear that new pages need to be created for certain search queries. This is normal and common practice.
The core contains all types of queries (high-, mid-, and low-frequency). Frequency is covered below; just keep this rule in mind as you read on. Simply put, you have to distribute these queries across specific pages of your site.
An example of a core distribution table across site pages.
Core collection methods
Wrong: copy from competitors
A method for when there is no time or money, but the core needs to be assembled somehow. Find several direct competitors (the stronger, the better), then use a service such as spywords.ru to get each one's list of keys. Do this for all of them, combine the queries, throw out duplicates, and you get a base to build on.
The disadvantages of this approach are obvious: it is not a given that you need to promote for the same queries, and parsing and tidying up such a core can take a lot of time.
Sometimes even near-identical competitors have specifics in their queries that they account for and you do not. Or they focus on something you do not do at all: those keys work into the void and drag your ranking down.
On top of that, bringing such a base into shape takes a lot of time, effort, and sometimes money. When you start counting the economics (and in marketing you always should), you often realize that building your own core from scratch would cost the same, or even less.
We do not recommend this method unless your project is in complete disarray and you need to start somehow. Either way, after launch you will have to redo almost everything, and the earlier work will be wasted.
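The combine-and-deduplicate step described above can be sketched in a few lines. This is an illustrative sketch only: it assumes each competitor's keys arrive as a plain list of strings, not tied to any particular service's export format.

```python
def merge_keyword_lists(lists):
    """Combine several competitor keyword exports into one list,
    normalizing case and whitespace and dropping duplicates
    while preserving first-seen order."""
    seen = set()
    merged = []
    for keywords in lists:
        for kw in keywords:
            kw = kw.strip().lower()
            if kw and kw not in seen:
                seen.add(kw)
                merged.append(kw)
    return merged

# Example: two competitor exports with an overlapping phrase.
competitor_a = ["Buy phone", "buy phone on credit"]
competitor_b = ["buy phone", "order phone"]
core_base = merge_keyword_lists([competitor_a, competitor_b])
# core_base keeps "buy phone" only once.
```

The resulting list is only a starting base; as the text notes, it still needs cleaning before it can be called a core.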
Correct: make your own semantic core from scratch
To do this, we study the site thoroughly and work out which audience we want to attract, and with which problems, requirements and questions. We think about how those people will search for us, check this against the target audience, and adjust the goals if necessary.
Such work takes a lot of time; it is unrealistic to do it all in a day. In our experience, the minimum time to collect a core is a week, provided one person works on this project full time. Remember that the semantic core is the foundation of promotion: the more accurately we compose it, the easier every later stage will be.
There is one danger newbies forget about. The semantic core is not built once and for all. You work on it constantly: the business changes, and so do queries and keywords. Some disappear, some become obsolete, and all of this must be reflected in the core promptly. This does not mean you can do a sloppy job at first and polish it later. It means that the more accurate the core, the faster you can incorporate changes.
Such work is expensive from the start, even in-house (if you do not order the SC from an external company), because it requires qualifications, an understanding of how search works, and full immersion in the project. The core cannot be handled in spare time; it should be the main task of an employee or a department.
Search frequency shows how often a given word or phrase is searched per month. There are no formal criteria for dividing queries by frequency; it all depends on the industry and profile.
For example, the phrase “buy a phone on credit” gets 7,764 queries per month. For the phone market, this is a mid-frequency query. Some phrases are asked much more often: “buy a phone” gets more than a million queries, a high-frequency query. And some much less often: “buy a phone on credit via the Internet” gets only 584 queries, low-frequency.
Meanwhile, the phrase “buy a drilling rig” gets only 577 queries, yet it is considered high-frequency. That is even fewer than the low-frequency query from the previous example. Why?
The reason is that the phone market and the drilling-rig market differ in unit volume by a factor of thousands, and so does the number of potential customers. What is a lot for one market is very little for another. Always look at the market size and know the approximate total number of potential customers in the region where you operate.
Division of requests by relative frequency per month
High-frequency (HF). Include them in the meta tags of every page of the site and use them for general site promotion. Competing head-on for HF queries is extremely difficult; it is easier to simply stay “on trend,” which is free. Include them in the core anyway.
Medium-frequency (MF). The same as high-frequency, but phrased a little more precisely. Competition for them in contextual advertising is not as tough as for HF, so they can already be used for paid promotion if the budget allows. Such queries can already drive targeted traffic to your site.
Low-frequency (LF). The workhorse of promotion. When configured properly, low-frequency queries provide the bulk of the traffic. You can freely advertise on them, optimize site pages for them, or even create new pages if you can't do without it. A good SC consists of roughly 3/4 such queries and constantly expands at their expense.
Ultra-low-frequency. The rarest but most specific queries, for example, “buy a phone at night in Tver on credit.” Very few people work with them, so there is practically no competition. Their drawback: they really are asked very rarely, yet take as much time as the rest. So it makes sense to deal with them once all the main work is done.
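Since frequency is relative to the market, a classifier has to compare a query's volume against the busiest query in its niche. The sketch below illustrates this idea; the threshold shares are invented for illustration (the article itself stresses there are no formal criteria), but they reproduce the phone and drilling-rig examples above.

```python
def classify_frequency(monthly_hits, market_max):
    """Classify a query relative to the busiest query in its market.
    The threshold shares are illustrative assumptions, not standards."""
    share = monthly_hits / market_max
    if share >= 0.1:
        return "high"
    if share >= 0.005:
        return "medium"
    if share >= 0.0001:
        return "low"
    return "ultra-low"

# Phone market: busiest query over a million hits per month.
classify_frequency(7764, 1_000_000)   # "medium" ("buy a phone on credit")
classify_frequency(584, 1_000_000)    # "low" ("... via the Internet")
# Drilling rigs: 577 hits is the market maximum, so it counts as high.
classify_frequency(577, 577)          # "high"
```

The point is not the exact numbers but the shape of the comparison: the same absolute volume lands in different classes depending on the market ceiling.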
Types of requests depending on the purpose
Informational. Used to learn something new or get information on a topic, for example: “how to choose a banquet hall” or “what kinds of laptops are there.” All such queries should lead to informational sections: a blog, news, or themed collections. If you see that many informational queries are being typed and there is nothing on the site to answer them, that is a reason to create new sections, pages or articles.
Transactional. Transaction = action: buy, sell, exchange, receive, deliver, order, and so on. Most often such queries are closed by pages for specific products or services. If most of your transactional queries are high- or mid-frequency, refine them to reduce the frequency. This lets you direct people precisely to the right pages instead of dropping them on the home page without specifics.
Others. Queries without a pronounced intent or action. From “beautiful balls” or “modeling clay crafts” you cannot tell exactly why the person searched. Maybe he wants to buy. Or learn the technique. Or read how to do it himself. Or have someone do it for him. It's unclear. Work with such queries carefully and clean the trash keys out of them thoroughly.
To promote a commercial site, you should mostly use transactional queries and avoid informational ones: for those, search engines show information portals, Wikipedia, and aggregator sites, and competing with them is almost impossible.
Trash keys
Sometimes the collected queries contain words or phrases that are not relevant to your industry, or describe things you simply don't do. For example, if you only make souvenirs from conifers, you probably don't need the query “bamboo souvenirs.” Here “bamboo” is a trash element that clogs up the core and muddies the search.
Collect such keys in a separate list; they will be useful for contextual advertising. We mark them as terms the site should not be found by, and then the site will appear in results for “pine souvenirs” but not for “bamboo souvenirs.”
We do the same across the whole core: find what is unrelated to the profile, remove it from the SC, and put it in a separate list.
Each query consists of three parts: a specifier, a body, and a tail.
The general principle: the body names the subject of the search, the specifier says what needs to be done with that subject, and the tail refines the entire query.
By combining different specifiers and query tails, you can generate many keywords that belong in the core.
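The combination step can be sketched mechanically. This is an illustrative sketch of the specifier + body + tail principle just described; the sample words are assumptions, not a real core.

```python
from itertools import product

def build_queries(specifiers, bodies, tails):
    """Combine specifier + body, then append each optional tail,
    producing candidate key phrases for the core."""
    queries = []
    for spec, body in product(specifiers, bodies):
        queries.append(f"{spec} {body}")
        queries.extend(f"{spec} {body} {tail}" for tail in tails)
    return queries

# "buy"/"order" are specifiers, "phone" a body, "on credit" a tail.
candidates = build_queries(["buy", "order"], ["phone"], ["on credit"])
# -> ["buy phone", "buy phone on credit",
#     "order phone", "order phone on credit"]
```

Every generated combination is only a candidate; as the article explains later, each still needs its frequency checked and trash removed.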
Building the core step by step from scratch
The very first thing you can do is go through all the pages of your site and write out all the product names and the stable phrases for product groups. To do this, look at the titles of categories and sections and the main characteristics. Record everything in Excel; it will come in handy at the next stages.
For example, if we have a stationery store, we get the following:
Then we add characteristics to each query to build up the “tail.” We work out what properties these products have and what else can be said about them, and write it all in a separate column:
After that, we add “specifiers”: action verbs relevant to the topic. If, for example, you have a store, these will be “buy,” “order,” “in stock,” and so on.
We then assemble individual phrases from all this in Excel:
Collecting extensions
Let's look at three typical tools for collecting the core: two free and one paid.
Free. Enter your phrase into it and you get a list of queries similar to yours. Look through it carefully and pick what fits, then run everything obtained at the first stage through it the same way. The work is long and tedious.
As a result, you will have a semantic core that reflects the content of your site as accurately as possible. You can then work with it fully in further promotion.
When searching for words, set the region where you sell the product or service. If you do not operate across all of Russia, switch to the “by region” mode (just below the search bar). This gives an accurate picture of queries in the location you need.
Consider query history. Demand is not static, which many forget. For example, if you check the query “buy flowers” at the end of January, it may seem that almost no one is interested in flowers: only a hundred or two queries. Check it in early March and the picture is completely different: thousands of users. So remember seasonality.
Also free, this tool helps find and select keywords, forecast queries, and view performance statistics.
Key Collector. This program is a real harvester that can do 90% of all the work of collecting a semantic core, but it is paid: almost 2,000 rubles. It gathers suggestions from many sources, checks rankings and queries, and collects core analytics.
The main features of the program:
collection of key phrases;
estimating the cost and value of phrases;
identification of relevant pages;
Everything it can do could be done with several free analogs, but it would take many times longer. Automation is this program's strong point.
As a result, you get not only a semantic core, but also full analytics and recommendations for improvement.
Removing trash keys
Now we need to clean the core to make it even more efficient. Use Key Collector (it does this automatically), or hunt for trash manually in Excel. At this stage we need the list of irrelevant and harmful queries compiled earlier.
Trash-key removal can be automated
Grouping requests
Now all the collected queries need to be grouped. This is done so that keywords close in meaning can be assigned to the same page rather than smeared across different ones.
To do this, combine queries that are similar in meaning and answered by the same page, and next to each group note which page it refers to. If there is no such page but the group contains many queries, it most likely makes sense to create a new page or even a section on the site to receive everyone making such queries.
An example of grouping, again, can be seen in our worksheet.
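The grouping just described can also be sketched as a simple inversion of a query-to-page mapping. This is an illustrative sketch; the marker string for missing pages is an assumption, chosen to reflect the advice that a large unassigned group may deserve a new page.

```python
def group_by_page(query_to_page):
    """Invert a query -> page mapping into page -> [queries] groups.
    Queries mapped to None are collected under a marker that signals
    a new page may need to be created for them."""
    groups = {}
    for query, page in query_to_page.items():
        groups.setdefault(page or "NEW PAGE NEEDED", []).append(query)
    return groups

groups = group_by_page({
    "buy pens": "/pens",
    "order pens": "/pens",
    "bulk pens wholesale": None,  # no landing page yet
})
# groups["/pens"] holds two queries; the wholesale query is flagged.
```

In practice the worksheet plays the role of `query_to_page`: one column of queries, one column of target URLs.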
Use every piece of automation software you can get. It saves a lot of time when building the core.
Do not collect informational and transactional queries on one page.
The more low-frequency queries in your texts, the better. But don't get carried away; don't turn the text on the site into something only a robot can understand. Remember that real people will read you too.
Periodically clean and update the core. Make sure the information in the semantic core is always up to date and reflects the current state of the business. Otherwise, you will spend money attracting people for things you cannot actually offer them.
Remember the point of it all. In your pursuit of search traffic, remember that people come from different sources and stay where they find something interesting. If your core is always up to date and the text on the pages is written in human, understandable, engaging language, you are doing everything right.
Finally, here is the core-building algorithm once more:
1. Find all keywords and phrases.
2. Clean them of junk queries.
3. Group the queries by meaning and match them to the pages of the site.
Do you want to start promoting your site but realize that collecting the semantic core takes a long time? Or would you rather skip the nuances and just get the result? Write to us, and we will select the best promotion option for your website.
This article explains how to compose a semantic core on your own so that your online store reaches the first positions in search-engine results. The keyword-selection process is not so simple: it takes care and a fair amount of time. But if you are ready to move forward and grow your business, this article is for you. It covers keyword-collection methods in detail, along with the tools that can help.
Why do this at all? The answer is banal: so that search engines “fall in love” with the site, and so that when users search for specific keywords, it is your resource that comes up.
And forming the semantic core is the first, but very important and confident, step on the way to that goal!
The next step is to create a kind of skeleton, which means distributing the selected “keys” across particular pages of the site. Only after that should you move to the next level: writing and publishing articles and tags.
Note that the web offers several definitions of the semantic core (hereinafter, SC).
In general they are similar, and summarizing them gives roughly this: a set of keywords (along with related phrases and word forms) for promoting a website. Such words precisely characterize the site's focus, reflect users' interests, and correspond to the company's activities.
This article walks through building an SC for an online bedding store. The whole process is divided into five sequential steps.
1) Collecting basic queries
We are talking here about all the phrases that match the store's line of business. That is why it is so important to think through, as precisely as possible, the phrases that best characterize the goods presented in the catalog.
Of course, this is sometimes difficult. But the right-hand column of Wordstat.Yandex comes to the rescue: it contains the phrases users most often enter together with the phrase you have chosen.
Watch the video on working with Wordstat (only 13 minutes)
To get results, enter the desired phrase in the service's search line and click the “Select” button.
To avoid copying all the queries manually, we recommend the Wordstat Helper extension, built for the Mozilla Firefox and Google Chrome browsers. This add-on greatly simplifies word selection. See the screenshot below for how it works.
Save the selected words in a separate document, then brainstorm and add the phrases you come up with yourself.
2) How to expand the SC: three options
The first step is relatively straightforward, though it requires attentiveness. The second demands active brainwork, because each selected phrase is the seed of a future group of search queries you will promote on.
To collect such a group, use:
- synonyms;
- paraphrases.
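Synonym expansion can be sketched as a simple substitution over a hand-made synonym map. This is an illustrative sketch: the map below is an assumption for the bedding example, not output from any real thesaurus or service.

```python
def expand_with_synonyms(phrase, synonyms):
    """Generate variants of a phrase by substituting synonyms
    for any word it contains. The synonym map is hand-made."""
    variants = [phrase]
    words = phrase.split()
    for word, alternatives in synonyms.items():
        if word in words:
            variants.extend(phrase.replace(word, alt) for alt in alternatives)
    return variants

# Assumed synonym map for the specifier "buy".
variants = expand_with_synonyms(
    "buy bedding",
    {"buy": ["order", "purchase"]},
)
# -> ["buy bedding", "order bedding", "purchase bedding"]
```

In practice the services described below do this expansion at a much larger scale; the sketch only shows the shape of the operation.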
To avoid getting bogged down at this stage, use special applications or services. How to do this is described in detail below.
How to expand the SC with Google Keyword Planner
Go to the relevant section (called the Keyword Planner) and pick the phrases that most accurately characterize the group of queries you are interested in. Leave the other parameters alone and click the “Get…” button.
After that, simply download the results.
How to expand the SC using Serpstat (ex-Prodvigator)
You can also use another similar service, one that analyzes competitors. After all, competitors are the best source of the keywords you need.
The Serpstat service (ex-Prodvigator) lets you determine exactly which key queries your competitors used to become search leaders. There are other such services too; decide for yourself which one to use.
In order to select search queries, you need:
- enter one request;
- indicate the region of promotion you are interested in;
- click on the "Search" button;
- and when the search finishes, select the “Search queries” option.
After that, click on the "Export Table" button.
How to compose the semantic core: expanding the SC with Key Collector / Slovoeb
Do you have a large store with a huge number of products? Then you need Key Collector.
But if you are just starting to learn keyword selection and semantic-core building, we recommend another tool with a dissonant name, Slovoeb. Its advantage is that it is completely free.
Download the application, go to the Yandex.Direct settings, and enter the username and password of your Yandex mailbox.
After that:
- open a new project;
- click on the Data tab;
- there click on the Add phrases option;
- indicate the region of promotion you are interested in;
- enter the queries that were generated earlier.
After that, start collecting the SC from Wordstat.Yandex. To do this:
- go to the “Data collection” section;
- select “Batch collection of words from the left column”;
- a new window will appear on the screen;
- in it, do as shown in the screenshot below.
Note that Key Collector is an excellent tool for large projects; with its help it is easy to collect statistics from services that analyze competing sites, such as SEMrush, SpyWords, Serpstat (ex-Prodvigator), and many others.
3) Delete unnecessary "keys"
So, the base has been formed, and the volume of collected “keys” is more than solid. But if you analyze them (here that simply means reading them carefully), you will find that not all of them match your store's theme, which means “non-target” users will land on the site through them.
Such words should be deleted.
Here is another example. Suppose you sell bedding, but your assortment simply does not include the fabric such linen is sewn from. Then everything related to fabrics must be removed.
A complete list of such words has to be formed manually; no automation will help here. Naturally, this takes a fair amount of time, and to avoid missing anything we recommend a full-fledged brainstorming session.
Typical kinds of words that are irrelevant for online stores:
- name and mention of competing stores;
- cities and regions where you do not work and where you do not supply goods;
- all words and phrases containing “free,” “old,” “used,” “download,” etc.;
- the name of a brand that is not represented in your store;
- "Keys" in which there are errors;
- repeating words.
Now we will tell you how to delete all the words you don't need.
Form a list
Open Slovoeb, select the “Data” section, go to the “Stop Words” tab, and enter the manually selected words. You can either type the words in by hand or simply upload a prepared file with them.
This way you can quickly eliminate from your list the stop words that do not match the store's subject matter or specifics.
How to build a semantic core: the quick filter
You now have a sizable word list. Analyze it carefully and start deleting unnecessary words by hand. The same Slovoeb service helps streamline this task. Here is the sequence of steps:
- take the first unnecessary word from your list; for example, let it be the city of Kiev;
- type it into the search (number 1 on the screenshot);
- mark the matching lines;
- right-click them and delete;
- press Enter in the search field to return to the original list.
Repeat these steps as many times as needed until you have reviewed the entire word list.
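The manual quick-filter loop above can be mirrored in code as a case-insensitive substring split. This is an illustrative sketch of the same operation, not a feature of Slovoeb itself.

```python
def quick_filter(queries, unwanted_term):
    """Split queries into (kept, removed) by a case-insensitive
    substring match, mirroring the manual quick-filter steps."""
    term = unwanted_term.lower()
    kept = [q for q in queries if term not in q.lower()]
    removed = [q for q in queries if term in q.lower()]
    return kept, removed

kept, removed = quick_filter(
    ["buy bedding", "buy bedding Kiev", "bedding price"],
    "kiev",
)
# kept == ["buy bedding", "bedding price"]
# removed == ["buy bedding Kiev"]
```

Keeping the removed list (rather than discarding it) matches the earlier advice to save trash keys for use as negatives in contextual advertising.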
4) How to compose a semantic core: group the queries
To understand how to distribute words across specific pages, group all the selected queries by forming so-called semantic clusters.
A cluster is a group of “keys” similar in topic and meaning, arranged as a multilevel structure. Say the first-level cluster is the search query “bedding”; second-level clusters will then be queries such as “blankets” and the like.
In most cases clusters are defined by brainstorming. It is important to know your assortment and product features well, and also to take into account how competitors structure their sites.
Pay special attention to one more thing: the last level of a cluster should contain only queries that correspond to exactly one need of potential customers, that is, to one specific type of goods.
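A multilevel cluster maps naturally onto a nested dictionary. The sketch below illustrates the two-level structure just described; the product names and queries are made up for the bedding example, not taken from any real core.

```python
# Hypothetical two-level cluster tree for the bedding example.
clusters = {
    "bedding": {                      # first-level cluster
        "blankets": [                 # second-level cluster
            "buy blankets",
            "order blankets online",
        ],
        "pillows": [
            "buy pillows",
            "pillows price",
        ],
    }
}

def leaf_queries(tree):
    """Collect every bottom-level query from a cluster tree.
    Leaves are lists of queries; inner nodes are dicts."""
    if isinstance(tree, list):
        return list(tree)
    collected = []
    for subtree in tree.values():
        collected.extend(leaf_queries(subtree))
    return collected
```

Each leaf list is exactly the kind of group that should map to one specific product page, per the rule above.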
Here, the same Slovoeb service and the Quick filter option described above come to your aid again. They will help you sort search queries into specific categories.
To sort this way, take a few simple steps. First, in the service's search bar, enter the keyword that will be used in the name of:
- a category;
- a landing page, etc.
For example, it could be a bedding brand. In the results, mark the phrases that suit you and copy them.
Right-click the phrases you do not need and delete them.
On the right side of the service menu, create a new group, naming it appropriately. For example, the brand name.
To move the selected phrases into this group, select the Data line and click the Add phrases caption. See the screenshot for details.
Pressing Enter in the search box will return you to the original word list. Follow the described procedure for all other requests.
The system displays all selected phrases in alphabetical order, which makes them easier to work with: you can quickly see what can be deleted or assign words to a particular group.
We should add that manual grouping also takes a fair amount of time, especially with a large number of key phrases. That is why we recommend automated paid programs, such as:
- Key Collector;
- Rush-Analytics;
- Just-Magic, and others.
There is also a completely free script from Devaka.ru. Note, too, that some types of queries often have to be combined,
since there is no point in piling up a huge number of site categories that differ only in names like “Beautiful bedding” and “Fashionable bedding.”
To determine the importance of each key phrase for a particular category, transfer them to the Google planner, as shown in the screenshot.
This way you can determine how much demand there is for a particular search query. Depending on usage, they all fall into several categories:
- high-frequency;
- mid-frequency;
- low-frequency;
- and even micro-low-frequency.
However, there are no exact numbers that assign a query to a particular group. It depends on the topic of both the site and the query: in one case, a query with up to 800 hits per month can be considered low-frequency; in another, a query with up to 150 will be high-frequency.
The highest-frequency queries among those selected will later go into tags. The lowest-frequency ones are best used to optimize specific store pages: competition for them is low, so it is often enough to fill those subsections with high-quality text descriptions for the page to reach the top of the search results.
All of the above actions will allow you to form a clear structure in which you will have:
- all the necessary and important categories (to visualize the “skeleton” of your store, use the XMind service);
- landing pages;
- pages that provide information that is important for the user - for example, with contact information, with a description of delivery conditions, etc.
How to expand the semantic core: an alternative method
As the site develops and the store expands, the SC will grow too. To support this, monitor and collect key phrases within each group; this greatly simplifies and speeds up expanding the SC.
To collect similar queries and suggestions, use additional services, including:
- Serpstat (ex. Prodvigator);
- Ubersuggest;
- Keyword Tool;
- and others.
The screenshot below shows how to use the Prodvigator service.
How to compose a semantic core: what to do after going through our instructions
So, to build an SC for an online store on your own, you need to perform a series of sequential steps.
It all starts with selecting the keywords users would enter only when searching for your products; these will later become the main query group. Next, expand the semantic core using search-engine tools. Analyzing competing sites is also recommended here.
The next steps will be like this:
- analysis of all selected search queries;
- removal of requests that do not correspond to the meaning of your store;
- grouping of requests;
- formation of the site structure;
- constant tracking of search queries and expansion of the SC.
The method of building an SC for an online store presented in this article is far from the only correct one; there are others. But we have tried to present the most convenient way.
Naturally, such indicators as the quality of text descriptions, articles, tags, store structure are also important for promotion. But we will talk about this in a separate article.
In order not to miss new and useful articles, be sure to subscribe to our newsletter!
Not taking the training yet? Sign up right now, and in 4 days you will have your own website.
If you can't make it yourself, we'll make it for you!
In this post we describe the complete algorithm for collecting a semantic core, mainly for an informational site, though the approach applies to commercial sites as well.
Initial semantics and creation of the site structure
Preparing words for parsing and initial site structure
Before parsing words, we need to know which words to parse. So first we draw up the initial structure of the site and the initial words for parsing (also called markers).
You can get the initial structure and words from:
1. Logic and your own head (if you understand the topic).
2. Competitors, whom you analyzed when choosing the niche or by entering your main query.
3. Wikipedia. It usually looks like this:
4. Wordstat for your main queries, including the right-hand column.
5. Other books and reference works on the subject.
For example, say the theme of our site is heart disease. Clearly, our structure must include all heart diseases.
You cannot do without a medical reference book. I would not rely on competitors here, because they may not have all the diseases; most likely they have not had time to cover them.
Your initial words for parsing will be exactly those heart diseases, and you will build the site structure from the keys you parse once you start grouping them.
In addition, you can take all heart-treatment medications as an expansion of the topic, and so on. Look at Wikipedia, competitors' site headings, and Wordstat, think logically, and in this way find more marker words to parse.
Site structure
You can look at competitors for general reference, but you do not always have to copy their structure. Proceed primarily from the logic of your target audience: they are the ones typing into the search engines the queries you parse.
For example, how should you proceed: list all heart diseases and from each lead to symptoms and treatment, or instead create Symptoms and Treatment sections and lead from them to the diseases? Such questions are usually resolved by grouping keywords based on search-engine data. But not always; sometimes you have to make your own choice and decide which structure is best, because queries can overlap.
Always remember that the structure takes shape throughout the collection of semantics. Sometimes its original form consists of just a few sections, and it expands with further grouping and collection as you begin to see the queries and their logic. Sometimes you can compose it right away without parsing keywords at all, because you know the topic well or competitors represent it well. There is no fixed system for drawing up a site structure; consider it your personal craft.
The structure can be your own (different from competitors'), but it must be convenient for people, match their logic (and therefore the logic of search engines), and allow you to cover all the thematic words in your niche. It should be the best and most convenient one!
Think ahead. It happens that you take a niche, later want to expand it, and then start changing the structure of the entire site. Changing an established structure on a live site is difficult and tedious: ideally you would have to change the URL nesting and re-link everything on the site. In short, it is extremely tedious, demanding work, so decide firmly from the start what you need and how it should look.
If you are very new to the topic of the site and do not yet know how the structure will come together or which initial words to take for parsing, you can swap stages 1 and 2. That is, first parse competitors (we will cover how below), look at their keys, compose the structure and initial parsing words from that, and only then parse Wordstat, suggestions, and so on.
To compose the structure, I use a mind-mapping tool, XMind. It is free and has all the essentials.
A simple structure looks like this:
This is the structure of a commercial site. Information sites usually have no intersections or product-card filters. This structure is not complicated either; it was drawn up for a client so that he could understand it. My own structures usually consist of many arrows, intersections, and comments, and only I can make sense of them.
Is it possible to create semantics while filling the site?
If the semantics are easy and you are confident in the topic and know it, you can build the semantics in parallel with filling the site. But the initial structure must be sketched out without fail. I sometimes practice this myself in very narrow or very broad niches, so as not to spend a lot of time collecting semantics but to launch the site right away. Still, I would not recommend it: the probability of errors is very high if you have no experience. It is easier when all the semantics are ready, the whole structure is ready, and everything is grouped and understandable. Besides, with ready-made semantics you can see which keys to prioritize, which ones have no competition and will bring more visitors.
You also need to factor in the size of the site: if the niche is broad, there may be no point in collecting the entire semantics up front. It is better to collect it along the way, because collecting semantics can take a month or more.
So, whether or not we sketched the initial structure, we move to the second stage: we have a list of starting words or phrases on our topic and can begin parsing.
Parsing and working in keycollector
For parsing, I naturally use Key Collector. I will not dwell on configuring it; read the program's help or find one of the many detailed setup articles online.
When choosing parsing sources, weigh your labor costs against their payoff. For example, if you parse the Pastukhov or MOAB databases, you will bury yourself in a heap of garbage queries that will need to be sifted out, and that takes time; in my opinion, it is not worth it for the sake of a couple of extra queries. On the topic of databases, there is a very interesting study from Rush Analytics. They praise themselves there, of course, but if you look past that, the data on the percentage of bad keywords is worth seeing: http://www.rush-analytics.ru/blog/analytica-istochnikov-semantiki
At the first stage I parse Wordstat, AdWords, and their suggestions, and use the Bukvarix keyword database (the desktop version is free). I also used to look through YouTube suggestions manually, but recently Key Collector added the ability to parse them, which is excellent. If you want to be truly exhaustive, you can add other keyword databases as well.
Start parsing and off you go.
Cleaning up the semantic core for an information site
We parsed the queries and got a list of words. It contains the right words as well as trash: empty, off-topic, irrelevant queries, etc. They need to be cleaned out.
I do not delete unnecessary words but move them into groups, because:
- In the future they can become food for thought and become relevant.
- We exclude the possibility of accidentally deleting words.
- When parsing or adding new phrases, they will not be added again if you check the corresponding box.
I sometimes forgot to check it, so now I set up parsing into a single working group and collect keys only there, so nothing is collected twice:
You can work this way, or however suits you best.
Collecting frequencies
We collect the base frequency [W] and the exact frequency ["!W"] for all words through Direct.
Whatever could not be collected there, we collect through Wordstat.
Cleaning one-word and non-format queries
We filter by one-word queries, review them, and remove the unneeded ones. Some one-word queries are not worth promoting: they are ambiguous or duplicate another one-word query.
For example, our topic is heart disease. There is no point in promoting the word "heart": it is too broad and ambiguous, and it is unclear what the person means.
We also look at the words for which frequency was not collected: either they contain special characters, or the query has more than 7 words. We move these to a non-format group; people are unlikely to enter such queries.
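As a sketch of this filtering step, here is how the non-format check could look in Python. The example queries are hypothetical, and the 7-word limit is the threshold mentioned above:

```python
import re

def is_nonformat(query: str, max_words: int = 7) -> bool:
    """Flag queries whose frequency Wordstat will not collect:
    phrases with special characters or longer than max_words words."""
    if re.search(r"[^\w\s-]", query):  # any char that is not a letter, digit, space or hyphen
        return True
    return len(query.split()) > max_words

# Hypothetical example queries for the heart-disease topic
queries = [
    "heart disease symptoms",
    "what to do if the heart hurts badly at night at home",
    "heart rate 120+ what does it mean",
]
nonformat = [q for q in queries if is_nonformat(q)]  # last two are flagged
```

As described earlier, flagged phrases are better moved to a separate group than deleted outright.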
Cleaning by base and exact frequency
Remove all words with a base frequency [W] from 0 to 1.
I also remove everything from 0 to 1 by the exact frequency ["!W"].
I distribute them into different groups.
Later, among these words you can still find normal, logical keywords. If the core is small, you can immediately review all zero-frequency words by hand and keep the ones you believe people actually enter. This helps cover the topic completely, and some visitors may come through these words. Naturally, though, use these words last, because they will definitely not bring big traffic.
The 0-to-1 threshold is also chosen based on the topic: if there are a lot of keywords, you can filter from 0 to 10. It all depends on the breadth of your topic and your preferences.
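A minimal sketch of this step in Python, assuming each phrase carries a (base, exact) frequency pair; the thresholds and example phrases are illustrative:

```python
def split_by_frequency(phrases: dict[str, tuple[int, int]],
                       min_base: int = 2, min_exact: int = 2):
    """Split phrases into a working set and a review group.
    Phrases with base [W] or exact ["!W"] frequency below the thresholds
    are moved to a review group rather than deleted, as recommended above."""
    keep, review = {}, {}
    for phrase, (base, exact) in phrases.items():
        if base < min_base or exact < min_exact:
            review[phrase] = (base, exact)
        else:
            keep[phrase] = (base, exact)
    return keep, review

keep, review = split_by_frequency({
    "heart disease symptoms": (500, 40),  # hypothetical frequencies
    "rare heart phrase": (1, 0),
})
```

Raising `min_base`/`min_exact` to 10 reproduces the wider filter suggested for cores with many keywords.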
Cleaning by coverage
The theory is as follows: take the word "forum". Its base frequency is 8,136,416, while its exact frequency is 24,377, a difference of more than 300 times. We can therefore assume the query is empty: it consists mostly of tails.
So for all words I calculate this KEI:
Exact Frequency / Base Frequency * 100% = Coverage
The lower the percentage, the more likely the word is empty.
In Key Collector this formula looks like this:
YandexWordstatQuotePointFreq / (YandexWordstatBaseFreq + 0.01) * 100
Here, too, everything depends on the topic and the number of phrases in the core: you can remove coverage below 5%, and where the core is large you can cut even below 10-30%.
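The coverage formula above translates directly into code. This sketch reuses the "forum" numbers from the text, and the 5% default threshold is the lower bound suggested above:

```python
def coverage(exact_freq: int, base_freq: int) -> float:
    """Coverage %: share of the base frequency made up by the exact form.
    The +0.01 mirrors the Key Collector formula and avoids division by zero."""
    return exact_freq / (base_freq + 0.01) * 100

def is_empty(exact_freq: int, base_freq: int, threshold: float = 5.0) -> bool:
    """A query is likely 'empty' when its coverage falls below the threshold."""
    return coverage(exact_freq, base_freq) < threshold

# The "forum" example from the text: base 8,136,416 vs exact 24,377
forum_coverage = coverage(24_377, 8_136_416)  # about 0.3%, almost certainly empty
```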
Implicit duplicate cleaning
To clean up implicit duplicates, we collect their AdWords frequency and use it as a guide, because it takes word order into account. To save resources, we collect this indicator only for the duplicates, not for the entire core.
First we find and mark all the implicit duplicates: open the Implicit duplicate analysis tab, run it, then close the tab; the marked phrases remain flagged in the working group. Now display only those phrases, because indicators are collected only for the phrases currently shown in the group, and only then start the collection.
Once AdWords has returned the figures, we go back into the implicit duplicate analysis.
We set the parameters for a smart group mark and click "Perform smart check". This way, within each duplicate group, only the query with the highest AdWords frequency is left unmarked.
It is still better to go through all the duplicates by hand afterwards, in case something is off. Pay special attention to groups with no frequency indicators: there the duplicates get marked randomly.
Everything you mark in the implicit duplicate analysis is also marked in the working group, so once the analysis is complete, simply close the tab and move all the marked implicit duplicates to the appropriate folder.
Stop word cleaning
I also divide stop words into groups. I list cities separately: they may come in handy if we ever decide to create a directory of organizations.
Words containing "photo" and "video" go into a separate group; they might be useful someday.
The same goes for "vital" queries, for example those mentioning Wikipedia; I include forums here, and in a medical topic this may also cover well-known names such as Malysheva, Komarovsky, etc.
It all depends on the topic as well. You can also separate commercial queries: price, buy, shop.
The result is a list of stop-word groups like this:
Cleaning up inflated words
This applies to competitive topics, where competitors often inflate query statistics to mislead you. So collect seasonality data and weed out all words with a median equal to 0.
You can also look at the ratio of the base frequency to the average monthly frequency: a large gap may likewise indicate an inflated query.
But keep in mind that these indicators may also mean the words are new (statistics appeared only recently) or simply seasonal.
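A sketch of this heuristic in Python, assuming twelve months of Wordstat history per phrase. The 10x ratio threshold is an assumption for illustration, not a number from the text:

```python
from statistics import median

def looks_inflated(monthly_freqs: list[int], base_freq: int,
                   ratio_threshold: float = 10.0) -> bool:
    """Heuristic check for artificially inflated queries:
    a zero median across months, or a base frequency far above the
    monthly average, is suspicious (or the phrase is new or seasonal)."""
    if median(monthly_freqs) == 0:
        return True
    avg = sum(monthly_freqs) / len(monthly_freqs)
    return base_freq / (avg + 0.01) > ratio_threshold

# One spike in an otherwise dead year: the median is 0, so it is flagged
spike = [0, 0, 0, 5000, 0, 0, 0, 0, 0, 0, 0, 0]
```

As the text warns, treat a positive result as a prompt for manual review, not automatic deletion.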
Geo cleaning
Usually a geo check is not required for information sites, but I will describe this point just in case.
If there is any doubt that some queries are geo-dependent, it is better to check through the Rookee collection; it sometimes makes mistakes, but much less often than checking this parameter through Yandex. After collecting via Rookee, manually recheck all the words that were marked as geo-dependent.
Manual cleaning
Now our core has become several times smaller. We revise it manually and remove unnecessary phrases.
At the output, we get the following groups of our kernel:
Yellow - it is worth digging, you can find words for the future.
Orange - may come in handy if we expand the site with new services.
Red - not useful.
Analyzing query competition for information sites
Having collected and cleaned the queries, we now need to check their competition to understand which queries to tackle first.
Competition by number of documents, titles, and main pages
All of this is easy to collect through KEI in Key Collector.
For each query we get how many documents the search engine (Yandex, in our example) found, how many main pages appear in the results for the query, and how many occurrences of the query there are in titles.
You can find various formulas online for combining these indicators; a default KEI formula even seems to be built into a fresh install of Key Collector. I do not follow them, because each of these factors should carry a different weight. The most important is the presence of main pages in the results, then occurrences in titles, then the number of documents. It is unlikely those weights can be captured properly in a formula, and even if they could, you would need a mathematician, and the resulting formula would probably no longer fit into Key Collector's capabilities.
Competition on link exchanges
Things are more interesting here. Each exchange has its own algorithm for calculating competition, and we can assume it accounts not only for main pages in the results but also for page age, link mass, and other parameters. These exchanges are, of course, designed for commercial queries, but some conclusions can still be drawn for informational ones.
We collect data from the exchanges, derive the averages, and use those as a guide.
I usually collect from 2-3 exchanges. The important thing is that all queries are collected from the same exchanges and the average is computed only over them; do not average some queries from one set of exchanges and others from another.
For a clearer picture, you can apply a KEI formula that shows the cost of one visitor based on the exchange data:
KEI = AverageBudget / (AverageTraffic + 0.01)
Dividing the average promotion budget across the exchanges by their average traffic forecast gives the cost of one visitor according to the exchange data.
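In code, this cost-per-visitor KEI is a one-liner; the budget and traffic figures below are made up for illustration:

```python
def cost_per_visitor(avg_budget: float, avg_traffic: float) -> float:
    """Average exchange budget divided by average predicted traffic.
    The +0.01 guards against division by zero, as in the formula above."""
    return avg_budget / (avg_traffic + 0.01)

# Hypothetical: 3000 rub. average budget, 100 visitors/month forecast
cpv = cost_per_visitor(3000.0, 100.0)  # close to 30 rub. per visitor
```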
Competition by Mutagen
Mutagen is not built into Key Collector, but that is no obstacle: all words can easily be exported from Key Collector to Excel and then run through Mutagen.
To expand the core through competitors, I use the Keyso service. Why is Keyso better? It has a larger database than its competitors, and a clean one: there are no duplicate phrases written in a different word order. For example, you will not find repeated keys like "type 1 diabetes" and "diabetes type 1" there.
Keyso can also find sites that share a single AdSense, Analytics, or Leadia counter, so you can see what other sites the owner of the analyzed site has. And in general, for finding competitors' sites, I think it is the best solution.
How do I work with Keyso?
We take any one competitor's site; more is better, of course, but not critical, because we will work in two iterations. Enter it into the field and click Analyze.
We get information on the site. What interests us here are the competitors, so we click to open all of them.
All the competitors open up.
These are all the sites that overlap in some way with the analyzed one. Among them will be youtube.com, otvet.mail.ru, and the like: large portals that write about everything. We do not need them; we need sites purely on our topic. So we filter them by the following criteria.
Similarity is the percentage of shared keys out of the given domain's total number of keys.
Thematicity is the number of our analyzed site's keys found among the competitor domain's keys.
Crossing these two parameters weeds out the general-purpose sites.
We set thematicity to 10 and similarity to 4 and see what we get.
That gives 37 competitors. We will still check them manually: upload them to Excel and, if necessary, remove the unneeded ones.
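The two-threshold filter can be sketched like this. The domain records and their scores are hypothetical, while the thresholds (thematicity 10, similarity 4) come from the text:

```python
def niche_competitors(rows, min_similarity=4, min_thematicity=10):
    """Keep only domains passing both Keyso-style thresholds:
    similarity (% of shared keys) and thematicity (our keys in theirs)."""
    return [r["domain"] for r in rows
            if r["similarity"] >= min_similarity
            and r["thematicity"] >= min_thematicity]

# Hypothetical competitor table: big portals score low on similarity
competitors = [
    {"domain": "cardio-help.ru", "similarity": 12, "thematicity": 45},
    {"domain": "otvet.mail.ru", "similarity": 1, "thematicity": 300},
    {"domain": "youtube.com", "similarity": 0, "thematicity": 500},
    {"domain": "heart-site.ru", "similarity": 7, "thematicity": 8},
]
niche = niche_competitors(competitors)  # only cardio-help.ru passes both
```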
Now go to the group report tab, enter all the competitors we found above, and click Analyze.
We get a list of keywords from all these sites. But the topic is not yet fully covered, so we move on to the competitors of this group.
Now we get all the competitors of all the sites we entered. There are several times more of them, and many are again general-topic sites. We filter these by similarity, say 30.
We get 841 competitors.
Here we can see how many pages each site has and how much traffic, and conclude which competitor is the most effective.
We export them all to Excel, go through them by hand, and keep only the competitors in our niche. You can mark the most effective ones in order to evaluate them later: what features they have on the site, which queries bring them a lot of traffic.
Now we go back to the group report, add all the competitors we have found, and get the list of keywords.
Here we can immediately filter the list by "!Wordstat" greater than 10.
These are our queries. Now we can add them to Key Collector, specifying that phrases already present in any other Key Collector group should not be added.
Now we clean up these keys, then expand and group our semantic core.
Semantic core collection services
There are many companies in this industry ready to offer clustering services. If you are not prepared to spend time learning the intricacies of clustering and doing it yourself, you can find plenty of specialists ready to do this work for you.
Yadrex
One of the first on the market to use artificial intelligence to create a semantic core. The head of the company is himself a professional webmaster and SEO specialist, so he guarantees the quality of his employees' work.
In addition, you can call the indicated numbers to get answers to all your questions regarding work.
When ordering the service, you receive a file indicating the core's content groups and its structure. Additionally, you get the structure in MindMup.
The cost varies with volume: the larger the order, the cheaper each key. The maximum price for an informational project is 2.9 rubles per key; for a selling (commercial) one, 4.9 rubles per key. Discounts and bonuses are provided for large orders.
Conclusion
This completes the creation of the semantic core for an information site.
I advise you to follow the change history of the Key Collector program: it is constantly gaining new tools; for example, YouTube parsing was added recently. New tools let you expand your semantic core even further.
In 2008, I created my first internet project.
It was an online electronics store that needed promotion.
Initially, I handed over the promotion work to the programmers who had created the store.
What to promote?
They made a list of keys in 5 minutes: mobile phones, camcorders, cameras, iPhones, Samsungs - all categories and products on the site.
These were general names that did not at all resemble a properly composed semantic core.
A long period passed without results.
Incomprehensible reports forced me to look for contractors specializing in website promotion.
I found a local company and entrusted the project to them, but again to no avail.
Then came the understanding that real professionals should handle promotion.
After reading through many reviews, I found one of the best-rated freelancers, who assured me of success.
Six months later, there were no results again.
It was the lack of organic results over the past two years that led me to SEO.
Subsequently, this became the main calling.
Now I understand what was wrong in my initial promotion.
These mistakes are repeated by the bulk of even experienced SEO specialists who have spent more than one year on website promotion.
The blunder was incorrect work with keywords. In fact, there was no understanding of what we were promoting.
Organic search is the most effective source of targeted traffic. To use it, you need to make the site interesting and visible to users of the search engines Yandex and Google. There is no need to reinvent the wheel here: it is enough to define what the audience of your project is interested in and how they seek information. This task is solved when building a semantic core.
The semantic core is a set of words and phrases that reflect the topic and structure of the site. Semantics is a branch of linguistics that studies the meaning of language units. That is why the terms "semantic core" and "core of meaning" are identical. Remember this point: it will keep you from sliding into keyword stuffing, cramming content with keys.
Composing the semantic core, you answer the global question: what information can be found on the site. Since one of the main principles of business and marketing is customer focus, you can look at the creation of the semantic core from the other side. You need to determine what search terms users use to search for information that will be published on the site.
The construction of a core of meaning solves another problem: distributing search phrases across the pages of the resource. As you work on the core, you determine which page best answers a particular search query or group of queries.
There are two approaches to solving this problem.
- The first assumes creating the site structure based on the results of analyzing users' search queries. In this case, the semantic core defines the framework and architecture of the resource.
- The second involves planning the resource structure before analyzing search queries. In this case, the semantic core is distributed over the finished framework.
Both approaches work in one way or another. But it is more logical to first plan the site structure and then determine the queries by which users will find each page. That way you stay proactive: you choose what you want to tell potential customers. If you fit the resource structure to the keys, you remain an object, reacting to the environment rather than actively changing it.
The difference between the SEO and marketing approaches to building a core should be clearly emphasized here. The logic of a typical old-school SEO: to build a website, find keywords and select the phrases that can reach the top of the results; then create the site structure and distribute the keys across pages; then optimize page content for the key phrases.
This is the logic of a businessman or a marketer: you need to decide what information to broadcast to the audience using the site. To do this, you need to know your industry and business well. First, you need to plan a rough site structure and a preliminary list of pages. After that, when building a semantic core, you need to find out how the audience is looking for information. With the help of content, you need to answer the questions that the audience asks.
What are the negative consequences of applying the "SEO-first" approach in practice? Because development starts from the keys rather than the content, the informational value of the resource decreases. A business should shape trends and choose what to say to customers; it should not limit itself to reacting to search-phrase statistics and creating pages only for the sake of optimizing for some key.
The planned result of building a semantic core is a list of key queries distributed across the pages of the site. It contains page URLs, search queries and an indication of their frequency.
How to build a site structure
The site structure is a hierarchical page layout. With its help, you solve several problems: you plan the information policy and the logic of information presentation, ensure the usability of the resource, and ensure that the site meets the requirements of search engines.
To build a structure, use a convenient tool: spreadsheet editors, Word or other software. You can also draw the structure on a piece of paper.
When planning your hierarchy, answer two questions:
- What information do you want to communicate to users?
- Where should this or that information block be published?
Imagine planning a site structure for a small pastry shop. The resource includes information pages, a publications section, and a showcase or product catalog. Visually, the structure might look like this:
For further work with the semantic core, arrange the site structure as a table. In it, list the page names and indicate their subordination. Also include columns for page URLs, keys, and frequency. The table might look like this:
You will fill in the URL, Keys and Frequency columns later. Now go to search for keywords.
What you need to know about keywords
To build a semantic core, you must understand what keywords are and which keys your audience uses. With this knowledge you will be able to use keyword research tools correctly.
What keys are used by the audience
Keys are words or phrases that potential customers use to find the information they need. For example, to make a cake, the user enters the query "Napoleon recipe with photo" into the search box.
Keywords are classified by several criteria. By popularity they are divided into high-, mid-, and low-frequency queries. According to various sources, search phrases are grouped as follows:
- Low-frequency: queries with up to 100 impressions per month. Some experts include queries with up to 1,000 impressions in this group.
- Mid-frequency: queries with up to 1,000 impressions. Sometimes experts raise the threshold to 5,000 impressions.
- High-frequency: phrases with more than 1,000 impressions. Some authors count keys with 5,000 or even 10,000 requests as high-frequency.
The difference in frequency estimates is due to the varying popularity of topics. If you are building a core for an online laptop store, the phrase "buy samsung laptop" with a frequency of about 6 thousand per month will be mid-frequency. If you are building a core for a sports club site, the query "aikido section" with a frequency of about 1,000 requests will be high-frequency.
What do you need to know about frequency when composing a semantic core? According to various sources, from two-thirds to four-fifths of all user requests are low-frequency. Therefore, you need to build as broad a semantic core as possible. In practice, it must constantly expand with low-frequency phrases.
Does this mean that high and medium frequency requests can be ignored? No, you can't do without them. But consider low-frequency keys as the main resource for attracting targeted visitors.
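As a sketch, the frequency buckets above can be expressed as a small helper. The default thresholds follow the first grouping in the list and, as the text explains, should be adjusted per niche:

```python
def classify_frequency(freq: int, low_max: int = 100, mid_max: int = 1000) -> str:
    """Bucket a query by monthly impressions into low/mid/high frequency.
    Thresholds are niche-dependent: some experts use 1,000 / 5,000 instead."""
    if freq <= low_max:
        return "low"
    if freq <= mid_max:
        return "mid"
    return "high"

classify_frequency(50)    # "low"
classify_frequency(500)   # "mid"
classify_frequency(6000)  # "high"
```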
According to user needs, keys are combined into the following groups:
- Informational. The audience uses them to find information. Examples: "how to store baked goods correctly", "how to separate the yolk from the white".
- Transactional. Users enter them when they plan to take an action. This group includes keys like "buy a bread maker", "download a recipe book", "order pizza with delivery".
- Other queries. These are key phrases whose user intent is hard to determine. For example, with the key "cake" a person may be planning to buy one, to bake one themselves, or may simply want information about cakes: definition, features, classification, etc.
Some experts single out navigational queries as a separate group. With these, the audience searches for information on specific sites, for example: "svyaznoy laptops", "city express track delivery", "sign in to LinkedIn". Navigational queries that do not concern your business can be ignored when compiling the semantic core.
How do you use this classification when building a semantic core? First, consider your audience's needs when distributing keys across pages and creating your content plan. Here everything is obvious: informational sections should answer informational queries, and most key phrases without an explicit intent should go there as well. Transactional queries should be answered by pages in the "Store" or "Showcase" sections.
Second, remember that many transactional queries are commercial. To attract organic traffic for the query "buy a samsung smartphone", you will have to compete with Euroset, Eldorado, and other business heavyweights. To avoid this unequal competition, use the advice above: expand the core as much as possible and lower the query frequency. For example, the frequency of "buy a samsung galaxy s6 smartphone" is an order of magnitude lower than that of "buy a samsung galaxy smartphone".
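A naive sketch of intent classification by specifier words; the specifier lists below are illustrative and would need to be much larger for a real project:

```python
# Hypothetical specifier lists; real projects need bigger dictionaries
TRANSACTIONAL = {"buy", "order", "download", "price"}
INFORMATIONAL = {"how", "why", "recipe", "what"}

def classify_intent(query: str) -> str:
    """Classify a query as transactional, informational, or other,
    based on which specifier words it contains (transactional wins ties)."""
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"

classify_intent("buy a bread maker")  # "transactional"
classify_intent("cake")               # "other"
```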
What you need to know about the anatomy of search queries
Search phrases consist of several parts: a body, a specifier, and a tail. This is easiest to see with an example.
Consider the query "cake". It cannot be used to determine the user's intent. It is high-frequency, which means high competition in the results; promoting with it will bring a large share of non-targeted traffic, which hurts behavioral metrics. The high frequency and vagueness of the query "cake" are determined by its anatomy: it consists only of a body.
Now look at the query "buy a cake". It consists of the body "cake" and the specifier "buy". The specifier expresses the user's intent and indicates whether the key is transactional or informational. Examples:
- Buy a cake.
- Cake recipes.
- How to serve a cake.
Sometimes specifiers express exactly opposite intents. A simple example: one user plans to buy a car, another to sell one.
Now look at the query "buy cake with delivery". It consists of a body, a specifier, and a tail. The tail does not change the user's intent or informational need but details it. Examples:
- Buy cake online.
- Buy a cake in Tula with delivery.
- Buy homemade cake in Oryol.
In each case, the intention of the person to purchase the cake is visible. And the tail of the key phrase details this need.
Knowing the anatomy of search phrases lets you derive a rough formula for selecting keys for the semantic core. Define the basic terms related to your business, product, and user needs. For example, the customers of a confectionery firm are interested in cakes, pastries, cookies, baked goods, cupcakes, and other confectionery.
After that, find the tails and specifiers that the project's audience uses with these basic terms. Tailed phrases simultaneously increase your reach and reduce the competitiveness of the core.
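The body + specifier + tail formula can be sketched as a simple combination generator. The word lists are illustrative, and the resulting candidates would still need frequency checks:

```python
from itertools import product

bodies = ["cake", "pastry"]                 # basic business terms
specifiers = ["buy", "recipe"]              # intent markers
tails = ["", "with delivery", "in Tula"]    # detailing tails ("" = no tail)

# Combine every specifier + body + tail into a candidate phrase
phrases = [" ".join(part for part in (s, b, t) if part)
           for b, s, t in product(bodies, specifiers, tails)]
# 2 bodies x 2 specifiers x 3 tails = 12 candidates, e.g. "buy cake with delivery"
```

In practice these candidates are then run through Wordstat so that zero-frequency combinations can be dropped.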
The long tail is a term defining a strategy of promoting a resource through low-frequency keywords: using the maximum number of keys with a low level of competition. Low-frequency promotion makes marketing campaigns highly effective, due to the following factors:
- Promoting by low-frequency keys takes less effort than promoting by competitive high-frequency queries.
- Working with long-tail queries is guaranteed to bring results, even though marketers cannot always predict exactly which keys will generate traffic. With high-frequency queries, honest marketers cannot guarantee results at all.
- Low-frequency queries produce results pages that match users' needs more specifically.
For large sites, the semantic core can contain tens of thousands of requests, and it is almost impossible to select and correctly group them by hand.
Services for compiling the semantic core
There are quite a few keyword research tools out there. You can build a core using paid or free services and programs. Choose a specific tool depending on your tasks.
Key Collector
You cannot do without this tool if you are engaged in internet marketing professionally, develop several sites or compose the core for a large site. Here is a list of the main tasks that the program solves:
- Selection of keywords. Key Collector collects queries through Yandex's Wordstat.
- Parsing search suggestions.
- Filtering out inappropriate search phrases using stop words.
- Filtering requests by frequency.
- Search for implicit duplicate queries.
- Identifying seasonal queries.
- Collection of statistics from third-party services and platforms: Liveinternet.ru, Metrika, Google Analytics, Google AdWords, Direct, Vkontakte and others.
- Search for pages relevant to the request.
- Search query clustering.
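Some of the tasks in this list, such as filtering by stop words, can be approximated with a few lines of plain Python if you prefer not to buy the program. The stop words and queries below are invented for the example:

```python
# Made-up stop words: suppose we don't compete on price and don't publish recipes.
STOP_WORDS = {"free", "cheap", "recipe", "diy"}

def filter_phrases(phrases, stop_words=STOP_WORDS):
    """Drop any phrase that contains at least one stop word."""
    return [p for p in phrases
            if not (set(p.lower().split()) & stop_words)]

queries = [
    "custom cakes to order",
    "cheap cakes",
    "birthday cake recipe",
    "wedding cakes with delivery",
]
print(filter_phrases(queries))
# Keeps only "custom cakes to order" and "wedding cakes with delivery".
```

This is a whole-word match; a production tool would also handle word forms and phrase-level stop rules.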
Key Collector is a multifunctional tool that automates the operations required to build a semantic core. The program is paid. Everything Key Collector "knows how" to do can be done with free alternatives, but you will have to juggle several services and programs.
SlovoEB
This is a free tool from the creators of Key Collector. The program collects keywords through Wordstat, determines the frequency of queries, parses search suggestions.
To use the program, specify the username and password for a "Direct" account in the settings. Do not use your main account: Yandex may block it for automated requests.
Create a new project. On the "Data" tab, select the "Add phrases" option. Specify search phrases that the project audience is supposed to use to find information about products.
In the "Collect keywords and statistics" section of the menu, select the required option and run the program. For example, determine the frequency of key phrases.
The tool allows you to select keywords, as well as automatically perform some tasks related to the analysis and grouping of queries.
Keyword selection service Yandex Wordstat
To see which phrases a page is displayed for in Yandex search results, open the "Search queries" tab in the Yandex.Webmaster panel -> "Recent requests".
Here we see the phrases for which there were clicks, or for which the site's snippet was shown in Yandex's TOP-50, over the last 7 days.
To view data only for the page that interests us, you need to use filters.
The possibilities of searching for additional phrases in Yandex.Webmaster are not limited to this.
Go to the "Search queries" tab -> Recommended Requests.
There may not be many phrases here, but you can find additional phrases for which the promoted page does not fall into the TOP-50.
Request history
The big disadvantage of the visibility analysis in Yandex.Webmaster, of course, is that there is data only for the last 7 days. To get around this limitation a little, you can try to supplement the list using the "Searches" tab -> "Request history".
Here you will need to select "Popular Searches".
You will receive a list of the most popular phrases from the last 3 months.
To get phrases from Google Search Console, go to the "Search Traffic" tab -> "Search Analytics". Then enable "Impressions", "CTR", and "Clicks" to see more information that can be useful when analyzing phrases.
By default, the tab displays data for 28 days, but you can expand the range to 90 days. You can also select the desired country.
As a result, we get a list of requests, similar to the one shown in the screenshot.
New version of Search Console
Google has already made some tools available in the new version of the panel. To view queries for a page, go to the "Status" tab -> "Performance".
In the new version the filters are arranged differently, but the filtering logic is the same, so there is no point in dwelling on it. The significant difference is the ability to analyze data for a longer period, not just the last 90 days - a real advantage over Yandex.Webmaster's 7 days.
Analysis services for competing websites
Competitor sites are a great source of keyword ideas. If you are interested in a specific page, you can manually determine the search phrases it is optimized for. To find the main keys, it is usually enough to read the material or check the content of the keywords meta tag in the page code. You can also use services for semantic analysis of texts, for example, Istio or Advego.
If you need to analyze the entire site, use comprehensive competitive-analysis services.
You can use other tools to collect key phrases as well. Here are some examples: Google Trends, WordTracker, WordStream, Ubersuggest, Topvisor... But do not rush to master all services and programs at once. If you are building a semantic core for your own small site, use a free tool, such as the Yandex keyword selection service or the Google Keyword Planner.
How to find keywords for the semantic core
The selection of key phrases consists of several stages:
- In the first, you will define the base keywords that the audience uses to search for your product or business.
- The second stage is devoted to the expansion of the semantic core.
- In the third step, you will remove inappropriate search phrases.
Defining base keys
Fill in a spreadsheet or write down general search phrases related to your business and products. Gather colleagues and brainstorm. Record all proposed ideas without discussion.
Your list will look something like this:
Most of the keys you wrote down are high in frequency and low in specificity. In order to get high-specificity mid- and low-frequency search phrases, you need to expand your core as much as possible.
Expanding the semantic core
You will accomplish this task using keyword research tools such as Wordstat. If your business is tied to a region, select the appropriate region in the settings.
Using the service for selecting key phrases, you need to analyze all the keys recorded at the previous stage.
Copy the phrases from the left column of Wordstat and paste them into the table. Pay attention to the right column of Wordstat as well: in it, Yandex suggests phrases that people used together with the main query. Depending on the content, you can either pick the suitable keys from the right column immediately or copy the entire list; in the latter case, inappropriate queries will be weeded out at the next stage.
The result of this stage is a list of search phrases for each basic key from your brainstorm. The lists can contain hundreds or thousands of queries.
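When you paste phrases from several Wordstat runs into one table, exact duplicates inevitably appear. A minimal sketch for merging lists while dropping duplicates (case and extra spaces are normalized; the sample phrases are illustrative):

```python
def merge_keyword_lists(*lists):
    """Merge several keyword lists, dropping exact duplicates
    and preserving first-seen order."""
    seen, merged = set(), []
    for lst in lists:
        for phrase in lst:
            key = " ".join(phrase.lower().split())  # normalize case and spaces
            if key not in seen:
                seen.add(key)
                merged.append(key)
    return merged

left_column = ["custom cakes", "Custom Cakes to order"]
right_column = ["custom cakes", "wedding cakes"]
print(merge_keyword_lists(left_column, right_column))
```

This only removes exact duplicates; implicit duplicates (same words in a different order) are handled at the cleanup stage.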
Remove inappropriate search phrases
This is the most time-consuming stage of working with the core: you need to manually remove inappropriate search phrases from it.
Do not use frequency, competition, or other purely "SEO" metrics as criteria for evaluating keys. Do you know why old-school SEOs consider certain search phrases junk? Take the key "diet cake": the Wordstat service predicts 3 impressions per month for it in the Cherepovets region.
To promote pages for specific keywords, old school SEOs bought or rented links. By the way, some experts still use this approach. It is clear that search phrases with low frequency in most cases do not pay off the money spent on buying links.
Now look at the phrase "diet cakes" through the eyes of the average marketer. Some members of the confectionery company's target audience are genuinely interested in such products, so the key can and should be included in the semantic core. If the pastry shop makes such products, the phrase will come in handy in the product description section. If for some reason the company does not make diet cakes, the key can be used as a content idea for the information section.
What phrases can be safely excluded from the list? Here are some examples:
- Keys mentioning competing brands.
- Keys that mention products or services that you do not sell or plan to sell.
- Keys containing the words "inexpensive", "cheap", or "at a discount". If you are not competing on price, cut off bargain hunters so they don't hurt your behavioral metrics.
- Duplicate keys. For example, of the three keys "custom-made cakes for a birthday", "birthday cakes custom-made" and "cakes custom-made for a birthday", it is enough to keep the first one.
- Keys mentioning inappropriate regions or addresses. For example, if you serve residents of the Northern District of Cherepovets, the key "cakes to order industrial district" is not suitable for you.
- Phrases entered with errors or typos. Search engines understand that the user is looking for croissants even if they type "croisants" in the search box.
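The "duplicate keys" bullet can be partially automated: treat phrases made of the same set of words as implicit duplicates and keep only the first. A sketch with illustrative data:

```python
def drop_implicit_duplicates(phrases):
    """Keep one phrase per unique *set* of words, so reorderings of the
    same words count as one query."""
    groups = {}  # frozenset of words -> first phrase seen with that set
    for p in phrases:
        signature = frozenset(p.lower().split())
        groups.setdefault(signature, p)
    return list(groups.values())

keys = [
    "custom-made cakes for a birthday",
    "birthday cakes custom-made for a",   # same words, shuffled order
    "cakes to order",
]
print(drop_implicit_duplicates(keys))
# The shuffled variant is dropped; two distinct queries remain.
```

Note that this only handles word order. Real tools also collapse morphological variants, which requires lemmatization.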
After deleting the inappropriate phrases, you have a list of queries for the basic key "cakes to order". Draw up the same lists for the other base keys from the brainstorming stage, then move on to grouping key phrases.
How to group keywords and build a relevance map
The search phrases that users use, or will use, to find your site are combined into semantic clusters: groups of queries that are similar in meaning. The process is called search query clustering. For example, the semantic cluster "Cake" includes all key phrases associated with this word: cake recipes, ordering a cake, photos of cakes, wedding cake, and so on.
A semantic cluster is a group of queries united by meaning. It is a multi-level structure: inside the first-order cluster "Cake" there are second-order clusters "Cake recipes", "Ordering cakes", "Photos of cakes". Within the second-order cluster "Cake recipes" you could theoretically distinguish a third level: "Recipes for cakes with mastic", "Recipes for biscuit cakes", "Recipes for shortbread cakes". The number of levels depends on the breadth of the topic. In practice, in most topics it is enough to single out the business-specific second-order clusters within the first-order clusters.
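The multi-level structure described above can be represented as a simple nested dictionary; the cluster names are taken from the cake example, and the `depth` helper is just for illustration:

```python
# Semantic clusters as a nested dict: each level maps a cluster name
# to its sub-clusters; leaf clusters map to an empty dict.
clusters = {
    "Cake": {
        "Cake recipes": {
            "Recipes for cakes with mastic": {},
            "Recipes for biscuit cakes": {},
            "Recipes for shortbread cakes": {},
        },
        "Ordering cakes": {},
        "Photos of cakes": {},
    }
}

def depth(tree):
    """Number of levels in a cluster tree."""
    if not tree:
        return 0
    return 1 + max(depth(sub) for sub in tree.values())

print(depth(clusters))  # 3 levels in this example
```

A spreadsheet with one column per level works just as well; the point is that each query ends up attached to exactly one leaf cluster.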
In theory, a semantic cluster can have many levels.
In practice, you will have to work with clusters of the first and second levels.
You identify most of the first-level clusters while brainstorming your basic key phrases. To do this, it is enough to understand your own business and glance at the site diagram you drew up before starting work on the semantic core.
It is very important to perform clustering at the second level correctly. Here, search phrases are distinguished by specifiers that indicate user intent. A simple example is the "cake recipes" and "custom cakes" clusters. Search phrases from the first cluster are used by people who need information; keys from the second are used by customers who want to buy a cake.
You have identified the search phrases for the cake-to-order cluster using Wordstat and manual filtering. They must be distributed between the pages of the "Cakes" section.
For example, in the cluster there are searches for “custom-made football cakes” and “custom-made soccer cakes”.
If there is a corresponding product in the company's assortment, create a matching page in the "Mastic Cakes" section. Add it to the site structure: record its name, URL, and search phrases with their frequencies.
Use Keyword Research or similar tools to see what other search phrases potential customers are using to find soccer-themed cakes. List the pages that are relevant to your keyword list.
In the list of cluster search phrases, mark the distributed keys in a convenient way. Distribute the remaining search phrases.
If necessary, change the structure of the site: create new sections and categories. For example, the page "Custom Paw Patrol Cakes" should go under the "Baby Cakes" section. At the same time, it can be included in the "Mastic Cakes" section.
Pay attention to two points. First, the cluster may contain no suitable phrases for a page you plan to create. This can happen for various reasons: imperfect phrase-collection tools, incorrect use of them, or low popularity of the product.
The absence of a suitable key in the cluster is not a reason to refuse to create a page and sell a product. For example, imagine that a confectionery company sells children's cakes featuring Peppa Pig's characters. If the corresponding keys are not included in the list, clarify the needs of the audience using Wordstat or another service. In most cases, there will be suitable queries.
Secondly, even after removing unnecessary keys, the cluster may still contain search phrases that do not suit the created or planned pages. They can be ignored or used in another cluster. For example, if a pastry shop for some reason does not sell the Napoleon cake, the corresponding key phrases can be used in the "Recipes" section.
Search query clustering
Grouping of search queries can be done manually, in Excel or Google spreadsheets, or automated, using special applications and services.
Clustering allows you to understand how requests can be distributed across the pages of the site for their fastest and most effective promotion.
Automatic clustering, or grouping, of the semantic core's search queries is based on analyzing the sites in the TOP-10 results of Google and Yandex.
How automatic query grouping works: for each query, the TOP-10 sites from the search results are collected. If at least 4-6 of those sites match between two queries, the queries can be grouped and placed on one page.
Automatic grouping is the fastest and most effective method of combining keywords into an almost ready-to-use site structure.
If the grouping contradicts search engine statistics, it will, alas, be impossible to form the site structure, distribute queries among its pages, and successfully promote those pages to the TOP!
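The grouping rule described above can be sketched as a simple greedy pass: a query joins an existing group if its TOP-10 URLs overlap the group's first query by at least the threshold. The SERP data below is invented, and real services use more sophisticated (e.g. "hard"/"soft") clustering:

```python
def group_queries(serps, min_overlap=4):
    """Greedily group queries whose top-10 URL sets share at least
    `min_overlap` URLs with the group's first (anchor) query."""
    groups = []
    for query, urls in serps.items():
        for group in groups:
            anchor_urls = serps[group[0]]
            if len(urls & anchor_urls) >= min_overlap:
                group.append(query)
                break
        else:
            groups.append([query])  # no match: start a new group
    return groups

# Invented top-10 results, shortened to 5 URLs each for readability.
serps = {
    "custom cakes": {"a", "b", "c", "d", "e"},
    "cakes to order": {"a", "b", "c", "d", "x"},
    "cake recipes": {"q", "r", "s", "t", "u"},
}
print(group_queries(serps))
# "custom cakes" and "cakes to order" share 4 URLs and land on one page.
```

Raising `min_overlap` toward 6 gives stricter, smaller groups; lowering it merges more aggressively.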
Applications and services for automatic grouping of search queries
Among the services that automate the grouping of keywords, it is worth highlighting:
- Key Collector.
- Rush Analytics.
- TopVisor.
After distributing all the keys, you will receive a list of existing and planned site pages, each with its URL, search phrases, and frequencies. What should you do with them next?
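The resulting relevance map is just a table with one row per page. One possible shape, dumped to CSV so it can be shared with editors and SEOs (the pages, titles, and frequencies are illustrative):

```python
import csv
import io

# Illustrative relevance map: one row per existing or planned page.
relevance_map = [
    {"url": "/cakes/football/", "title": "Custom football cakes",
     "phrases": "custom-made football cakes (25); soccer cakes to order (10)"},
    {"url": "/cakes/children/", "title": "Baby cakes",
     "phrases": "children's cakes to order (140)"},
]

# Write the map to CSV in memory; in practice you would write to a file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["url", "title", "phrases"])
writer.writeheader()
writer.writerows(relevance_map)
print(buf.getvalue())
```

A Google Sheet with the same three columns serves the same purpose; the format matters less than keeping URL, title, and keys with frequencies together.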
What to do with the semantic core
A table with the semantic core should become a roadmap and the main source of ideas when forming the site structure and content plan.
Look: you have a list of preliminary page titles and search phrases. They define the needs of the audience. When drawing up a content plan, you just need to refine the title of the page or publication and include the main search term in it. This is not always the most popular key: besides popularity, the query in the title should best reflect the needs of the page's audience.
Use the rest of the search phrases as an answer to the question "what to write about". Remember: you don't have to cram every search phrase into your article or product description at all costs. The content should cover the topic and answer users' questions. Once again, focus on information needs, not on search phrases and how to fit them into the text.
Semantic core for online stores
The specifics of preparing and clustering semantics for online stores lie in four groups of pages that are very important for what follows:
- Home page.
- Pages of sections and subsections of the catalog.
- Product card pages.
- Blog article pages.
Above, we already talked about the different types of search queries: informational, transactional, commercial, and navigational. For the section and product pages of an online store, transactional queries are of primary interest, i.e. queries by which search engine users want to find sites where they can make a purchase.
It is necessary to start forming the core with a list of products that you are already selling or planning to sell.
For online stores:
- the "body" of the queries will be the product names;
- the "specifiers" will be words such as "buy", "price", "sale", "to order", "photo", "description", "