Imagine a world where even the high-up Google engineers don't know what's in the ranking algorithm. We may be moving in that direction. In today's Whiteboard Friday, Rand explores and explains the concepts of deep learning and machine learning, drawing us a picture of how they could impact our work as SEOs.
For reference, here's a still of this week's whiteboard!
Video transcription
Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are going to take a peek into Google's future and look at what it could mean as Google advances their machine learning and deep learning capabilities. I know these sound like big, fancy, important words, but they're not actually that tough to understand. In fact, they're simple enough that even a lot of technology firms like Moz do some level of machine learning. We don't do anything with deep learning or layered neural networks yet, but we might be going that direction.
But I found an article published in January that's absolutely fascinating and, I think, really worth reading, and I wanted to extract some of its contents for Whiteboard Friday, because I do think this is tactically and strategically important for SEOs to understand, and really important for us to understand so that we can explain to our bosses, our teams, and our clients how SEO works and will work in the future.
The article is called "Google Search Will Be Your Next Brain." It's by Steven Levy, over on Medium. I do encourage you to read it. It's a relatively lengthy read, but a fascinating one if you're interested in search. It starts with a profile of Geoff Hinton, who was a professor in Canada, worked on neural networks for a long time, then came over to Google and is now a distinguished engineer there. As the article says, quote: "He is versed in the black art of organizing several layers of artificial neurons so that the entire system, the system of neurons, could be trained or even train itself to divine coherence from random inputs."
This sounds complex, but basically what we're saying is we're trying to get machines to come up with outcomes on their own rather than us having to tell them all the inputs to consider, how to process those inputs, and the outcome to spit out. So this is essentially machine learning. Google has used this, for example, so that when you give it a bunch of photos, it can say, "Oh, this is a landscape photo. Oh, this is an outdoor photo. Oh, this is a photo of a person." Have you ever had that creepy experience where you upload a photo to Facebook or to Google+ and they say, "Is this your friend so-and-so?" And you're like, "God, that's a terrible shot of my friend. You can barely see most of his face, and he's wearing glasses, which he usually never wears. How in the world could Google+ or Facebook figure out that this is this person?"
That's what they use these neural networks, these deep machine learning processes, for. So I'll give you a simple example. Here at Moz, we do machine learning very simplistically for Page Authority and Domain Authority. We take all the inputs -- number of links, number of linking root domains, every single metric that you could get from Moz on the page level, on the subdomain level, on the root-domain level, all these metrics -- and then we combine them together and we say, "Hey machine, we want you to build us the algorithm that best correlates with how Google ranks pages, and here's a bunch of pages that Google has ranked." I think we use a base set of 10,000, and we do it about quarterly or every 6 months: feed that back into the system, and the system pumps out the little algorithm that says, "Here you go. This will give you the best correlating metric with how Google ranks pages." That's how you get Page Authority and Domain Authority.
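To make that concrete, here's a minimal sketch of that kind of training loop in Python. This is not Moz's actual model -- the features, the data, and the choice of gradient boosting are all hypothetical stand-ins for whatever the real system uses:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Hypothetical training set: one row per page, columns standing in for
# link metrics (total links, linking root domains, etc.).
n_pages = 10_000
features = rng.lognormal(mean=3.0, sigma=1.5, size=(n_pages, 3))

# Hypothetical observed rank position for each page (0 = best). In reality
# this would come from actual Google results for a base set of queries.
true_score = features @ np.array([0.2, 0.7, 0.1]) + rng.normal(0, 50, n_pages)
observed_rank = np.argsort(np.argsort(-true_score))

# "Hey machine, build us the algorithm that best correlates with how
# Google ranks pages."
model = GradientBoostingRegressor(random_state=0).fit(features, observed_rank)
predicted_score = model.predict(features)

# Judge the output the same way PA/DA is judged: rank correlation.
rho = spearmanr(predicted_score, observed_rank)[0]
print(f"Spearman correlation with observed rankings: {rho:.3f}")
```

Run against real SERP data instead of these synthetic stand-ins, the printed correlation is exactly the "best correlating metric" described above.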
Cool, really useful, helpful for us to say like, "Okay, this page is probably considered a little more important than this page by Google, and this one a lot more important." Very cool. But it's not a particularly advanced system. The more advanced system is to have these kinds of neural nets in layers. So you have a set of networks, and these neural networks, by the way, they're designed to replicate nodes in the human brain, which is in my opinion a little creepy, but don't worry. The article does talk about how there's a board of scientists who make sure Terminator 2 doesn't happen, or Terminator 1 for that matter. Apparently, no one's stopping Terminator 4 from happening? That's the new one that's coming out.
So one layer of the neural net will identify features. Another layer of the neural net might classify the types of features that are coming in. Imagine this for search results. Search results are coming in, and Google's looking at the features of all the websites and web pages, your websites and pages, to try and consider like, "What are the elements I could pull out from there?"
Well, there's the link data about it, there are things that happen on the page, there are user interactions, and all sorts of stuff. Then we're going to classify types of pages and types of searches, and then we're going to extract the features or metrics that predict the desired result: that a user gets a search result they really like. We have an algorithm that can consistently produce those, and then neural networks are hopefully designed -- that's what Geoff Hinton has been working on -- to train themselves to get better. So it's not like with PA and DA, where our data scientist Matt Peters and his team look at it and go, "I bet we could make this better by doing this."
This is standing back and the guys at Google just going, "All right machine, you learn." They figure it out. It's kind of creepy, right?
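If it helps, here's a toy forward pass through a two-layer network, just to make "layers" concrete: the first layer combines raw page signals into intermediate features, and the second turns those into a prediction. The weights here are random placeholders; a real system would learn them from data about successful searches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw inputs for a single page: stand-ins for link data, on-page signals,
# user-interaction metrics, and so on.
raw_signals = rng.random(10)

# Layer 1 extracts intermediate "features"; layer 2 turns them into a score.
# These weights are random placeholders -- a real system would learn them.
w1, b1 = rng.normal(size=(16, 10)), np.zeros(16)
w2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

hidden = np.maximum(0.0, w1 @ raw_signals + b1)      # ReLU: learned "features"
score = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))    # sigmoid: final score

print(f"'Is this a satisfying result?' score: {score[0]:.3f}")
```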
In the original system, you needed those people, these individuals here to feed the inputs, to say like, "This is what you can consider, system, and the features that we want you to extract from it."
Then unsupervised learning, which is kind of this next step, the system figures it out. So this takes us to some interesting places. Imagine the Google algorithm, circa 2005. You had basically a bunch of things in here. Maybe you'd have anchor text, PageRank and you'd have some measure of authority on a domain level. Maybe there are people who are tossing new stuff in there like, "Hey algorithm, let's consider the location of the searcher. Hey algorithm, let's consider some user and usage data." They're tossing new things into the bucket that the algorithm might consider, and then they're measuring it, seeing if it improves.
But you get to the algorithm today, and gosh, there are going to be a lot of things in there that are driven by machine learning, if not deep learning yet. So there are derivatives of all of these metrics. There are conglomerations of them. There are extracted pieces like, "Hey, we only want to look at and measure anchor text on these types of results when we also see that the anchor text matches up to the search queries that have previously been performed by people who also searched for this." What does that even mean? But that's what the algorithm is designed to do. The machine learning system figures out things that humans would never extract, metrics that we would never even create from the inputs that they can see.
Then, over time, the idea is that in the future even the inputs aren't given by human beings. The machine is getting to figure this stuff out itself. That's weird. That means that if you were to ask a Google engineer in a world where deep learning controls the ranking algorithm, if you were to ask the people who designed the ranking system, "Hey, does it matter if I get more links," they might be like, "Well, maybe." But they don't know, because they don't know what's in this algorithm. Only the machine knows, and the machine can't even really explain it. You could go take a snapshot and look at it, but (a) it's constantly evolving, and (b) a lot of these metrics are going to be weird conglomerations and derivatives of a bunch of metrics mashed together and torn apart and considered only when certain criteria are fulfilled. Yikes.
So what does that mean for SEOs? What do we have to care about from all of these systems and this evolution and this move towards deep learning? That, by the way, is what Jeff Dean, who is, I think, a senior fellow over at Google -- he's the dude everyone jokes about being the world's smartest computer scientist over there -- has basically said: "Hey, we want to put this into search. It's not there yet, but we want to take these models, these things that Hinton has built, and we want to put them into search." For SEOs, that future is going to mean far fewer distinct, universal ranking inputs, ranking factors. We won't really have ranking factors in the way that we know them today. It won't be like, "Well, they have more anchor text and so they rank higher." That might be something we'd still look at, and we'd say, "Hey, they have this anchor text. Maybe that's correlated with what the machine, the system, is finding to be useful, and that's still something I want to care about to a certain extent."
But we're going to have to consider those things a lot more seriously. We're going to have to take another look at them and determine whether the things that we thought were ranking factors still are when the neural network system takes over. It also is going to mean something that many, many SEOs have been predicting for a long time and have been working towards, which is more success for websites that satisfy searchers. If the output is successful searches, and that's what the system is looking for and what it's trying to correlate all its metrics to, then if you produce something that means more successful searches for Google searchers when they get to your site -- if your ranking at the top means Google searchers are happier -- well, you know what? The algorithm will catch up to you. That's kind of a nice thing. It does mean a lot less info from Google about how they rank results.
So today you might hear from someone at Google, "Well, page speed is a very small ranking factor." In the future they might say, "Well, page speed, like all ranking factors, is totally unknown to us." Because the machine might say, "Well, yeah, page speed as a distinct metric, one that a Google engineer could actually look at, looks very small." But derivatives of things that are connected to page speed may be huge inputs. Maybe page speed is something that, across all of these, is very well connected with happier searchers and successful search results. Weird things that we never thought of before might be connected with them as the machine learning system tries to build all those correlations, and that means potentially many more inputs into the ranking algorithm, things that we would never consider today, things we might consider wholly illogical, like, "What servers do you run on?" Well, that seems ridiculous. Why would Google ever grade you on that?
If human beings are putting factors into the algorithm, they never would. But the neural network doesn't care. It doesn't care. It's a honey badger. It doesn't care what inputs it collects. It only cares about successful searches, and so if it turns out that Ubuntu is poorly correlated with successful search results, too bad.
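Here's a speculative little illustration of that "derivatives of page speed" point, with entirely synthetic data and nothing that reflects what Google actually does: a raw metric can look unimportant on its own while a feature derived from it carries most of the weight.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 2_000
page_speed = rng.random(n)
content_depth = rng.random(n)

# Pretend "searcher happiness" is driven by an interaction of the two raw
# signals, not by either one alone.
happiness = page_speed * content_depth + rng.normal(0, 0.05, n)

X = np.column_stack([page_speed, content_depth, page_speed * content_depth])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, happiness)

# The raw metrics get little weight; the derived feature dominates.
for name, importance in zip(
    ["page speed (raw)", "content depth (raw)", "speed x depth (derived)"],
    model.feature_importances_,
):
    print(f"{name:>26}: {importance:.2f}")
```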
This world is not here yet today, but certainly there are elements of it. Google has talked about how Panda and Penguin are based off of machine learning systems like this. I think, given what Geoff Hinton and Jeff Dean are working on at Google, it sounds like this will be making its way more seriously into search and therefore it's something that we're really going to have to consider as search marketers.
All right everyone, I hope you'll join me again next week for another edition of Whiteboard Friday. Take care.
TL;DR - The Google algorithm of the future may not be recognizable even to the folks who work on it, and whether a certain input is considered and how strongly may be impossible to know. That, and things that correlate positively with positive searcher experiences (even if they're not causal or intentionally put in the algo) could be a part of it.
Some recommended resources in case you can't get enough of this stuff:
Hope y'all enjoyed it!
TL;DR potential addon -- Google may finally find a way to get rid of idiot SEOs! Great!
Hope so
Rand,
I believe that this will cause search marketers to perform more experiment-based changes. For example, if I see that producing content around a particular set of keywords satisfies the algorithm, then I would most likely attempt to implement this.
It could also rid search marketing of those awful "SEOs" who claim they can "get you to number one," because they wouldn't necessarily know exactly what it takes to get to the top of the rankings.
In that world, I'd love it. Experiments are too fun!
Thanks for your WBFs.
Cole
I'd certainly love to see more experiments in our field! I've been working with Eric Enge on some IMEC Labs tests of late with some cool results (e.g. Tweets do seem to get pages indexed).
This seems to be NEXT GENERATION technology for SEO. New terminology and big jargon. Neural network systems give computers and machines artificial-intelligence power; if I'm not wrong, something similar to human brains. Really interesting...
One critical viewpoint:
Current "human-assisted" ranking factors weren't derived from nothing. The most amazing, badass, unassisted machine learning should therefore end up with at least most of the same conclusions as well-tested human-derived theories. As cool as the sci-fi applications of this are, it's really impossibly difficult still to envision a day when your code has reached the point of being smarter than you and it's time to release VERSION ULTIMATE: OUR SOFTWARE IS NOW SMARTER THAN WE ARE.
Machine learning is cool, but forever unassisted machine learning still seems a lot further off.
I see applications of this sort of thing remaining far simpler than a lot of this content makes them out to be. David Mease's co-authored paper, Evaluating Web Search Using Task Completion Time, already talks about measuring the time to complete tasks, comparing results, and responding with an "alternative experiential design." Really, I see that as likely being as simple a process as dynamically re-weighting existing factors and re-measuring (see the sketch below). But if an algorithm is to invent new factors, all of the minute signals we already have of popularity, subject matter, site and content quality, and user experience are almost certainly where it still has to look. And if we care about that, Google engineers don't have to relinquish control in order to let a machine invent ideas.
So I personally doubt we'll ever see anything as crazy as the type of web server (per the video), or something still not even theorized by a human mind, being what ends up mattering in our search results in any significant measure, no matter how good the technology gets.
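For what it's worth, here's a toy version of that "dynamically re-weight and re-measure" loop -- a minimal sketch where a made-up objective stands in for a real measurement like task completion time; nothing here reflects how any search engine actually does it:

```python
import numpy as np

rng = np.random.default_rng(5)

def task_completion_time(weights):
    """Hypothetical measurement: lower is better. Pretend the 'true' best
    weighting over three known ranking factors is [0.5, 0.3, 0.2]."""
    target = np.array([0.5, 0.3, 0.2])
    return float(np.sum((weights - target) ** 2)) + rng.normal(0, 1e-4)

weights = np.array([1 / 3] * 3)  # start with equal weight on each factor
best = task_completion_time(weights)

for _ in range(2000):
    candidate = weights + rng.normal(0, 0.01, 3)  # perturb the weighting
    candidate = np.clip(candidate, 0, None)
    candidate /= candidate.sum()                  # keep weights normalized
    score = task_completion_time(candidate)
    if score < best:                              # keep the re-weighting if
        weights, best = candidate, score          # users finished tasks faster

print("Learned weights:", np.round(weights, 2))
```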
I think we should not forget the ideal mission Google set up for itself in relation to search: "We want to build the Star Trek holodeck."
Ideal missions should not be considered just utopian desires, but as a final purpose. And Google has enough money to make that utopia come true.
I'd like, then, to add a link to the ones Rand listed in his comment, one that is clearly related to deep learning, neural computing, and search:
Google, Stanford build a hybrid neural network that can explain photos
Totally agree with you, Gianluca; the next generation of search will be much more than this. We have already seen Google Now and similar apps trying to cover daily things and queries in a very casual manner, unlike search in the past.
Don't forget that in the future SEO and Google will not be synonymous. All big players have their ups and downs, cycles, and even drops from which they never recover. As Rand says, there are many things that Google is taking to the extreme, and this is a symptom of a decadent maturity.
See you at The Inbounder, in Valencia ;)
Cool!
Can't wait! See you in Valencia :-)
Rand, thanks for another interesting edition. When you were explaining the first part, I was literally thinking about the movie The Matrix, where everything is controlled by machines.
On a serious note, I think we SEOs will have to learn how machines actually communicate. We don't need to go into the details, but a thorough grasp of the idea will certainly play a vital role. From the searcher's perspective, this neural pattern thing is just fabulous, but given Google's policy of encouraging the PPC model in search results, can we really expect something mind-blowing like this in search?
I'd been waiting for this kind of WBF ever since I read this great article in Aeon magazine:
https://aeon.co/magazine/technology/is-technology-m...
What this shift basically means is an even bigger highlight on quality work:
- SEOs learning more about UX
- SEOs learning more about content strategies and content creation overall
- SEOs learning more about something increasingly important ever since search first began: users, their intent, and their behaviour
Makes you push yourself harder
Google: Still in the Search series by Steven Levy on Medium:
Part 1: How Google Search Dealt With Mobile
Part 2: How Google Knows What You Want to Know
Part 3: Google Search Will Be Your Next Brain
Part 4: The Deep Mind of Demis Hassabis
I have also wondered whether searcher intelligence might become a ranking factor that we, as human SEOs, cannot currently account for in our intent to serve content that meets consumer needs as well as the ranking bots do.
What I mean by searcher intelligence is the idea that the pages an individual typically visits, consistently visits, how quickly someone types in the search input box (single-finger pecking vs. fluid typing), profile data from around the web showing where you went to school, how far you got in school, your job, and on and on. All of these "intelligence" variables would help Google understand your expected intent and serve a different set of results than someone else who they see as less intelligent and who may need easier-to-comprehend content. This would create a better user experience for each individual type, instead of meeting in the middle to serve both audiences.
If this were the case, as SEOs, we might focus on developing content that serves different personas based on intelligence and comprehension versus just demographic and psychographic variables. For a more astute audience, we might write an in-depth article with many references to various authority sources, whereas a common audience would be served an infographic or video with much shorter content.
Thoughts?
'Deep' learning for the day. :-)
It all seems very complex and immensely clever! From a layman's point of view, search is growing ever more complex, and SEOs especially cannot simply focus on a handful of factors. It reminds me of a talk Rand did on how SEO has changed over the last 5 years. Add machine learning and deep learning to this and it makes it 'seem' like our lives as SEOs are destined to become impossibly complex and unmanageable. I would in fact argue that the potential lack of knowledge about ranking factors mentioned in this WBF, coupled with Google's overarching mission to provide the best results in order to retain customers, means that the main focus for SEOs should always be value for the user. Value seems to be one of the most talked-about topics at the moment, and rightly so: if you focus on how you can add value for the user, it will stand you in good stead for the increased complexity of search that is coming sooner or later!
Also, with the potential lack of clarity surrounding ranking metrics, I wonder whether this will help reduce spam and shoddy SEO, or whether it will simply increase speculation and tactics that will be penalised.
On another note, computers being able to figure things out for themselves does give me the willies somewhat. As Stephen Hawking said recently, as soon as they can start to evolve without human input we could all be doomed. Let's hope that those scientists are covering every base to make sure that Skynet never happens!
I think that even geniuses like Stephen Hawking can be wrong sometimes...
True. What he says does make complete sense though. Let's hope he is wrong! Either way, it is incredibly impressive what the guys in white coats are doing both at Google and elsewhere in regards to machine learning. If not slightly intimidating.
I have a feeling people are too concerned about it. Why? Things will change for SEOs? Of course. However, to evaluate all sites equally, the machine will still need a set of rules, ranking factors. No one will know exactly which those ranking factors are. And how is that different from how things are now? Experiments will uncover many of them. Companies will want to synchronize their tactics with search engine changes. More jobs for SEOs who are prepared.
And therein lies the true value of this WBF: it warns people to prepare for the upcoming changes.
And don't worry. Web designers, SEOs, developers, etc., plus Google and Bing, have already formed a kind of ecosystem. Removing any part of an ecosystem is always a bad idea.
The concept of neural nets really is central to understanding Google's algorithm and the network of search engines. As players in the SEO field, it's crucial for us to step out of the box and learn how to integrate the sorts of deep machine learning processes that Rand refers to. Although there is a learning curve to understanding the entire picture, it's well worth the time and energy invested.
"more success for websites that satisfy searchers"
Nothing could be better. Focus more on the user and less on the technicalities that may make a website rank better -- at least until the next quality algorithm update. Anything that moves in that direction is good.
Also, the article you mention at the beginning is really good. I, too, read it back in January, and I think that Google is just way ahead of the game. I know some would criticize Google "cheerleaders," but there's no arguing that the level of innovation at Google is head and shoulders above the competition.
Thanks for sharing another Whiteboard Friday post. I can't understand why people would dislike this.
This reminds me of a previous blog article regarding the evolution of Google's algorithm (which I loved), in which the main message is to do the right things and build a cool website that answers searchers' questions.
However... does this mean that there would be no real way to correlate success and failure with specific actions that you take?
Or... would we end up with situations where there's a mad dash for us all to move our websites to a Ubuntu server based in Dundee with an IP range of 55.112.122.xxx, because the stats say that's the best place to be and everyone is still looking for a quick fix? :-(
Very interesting stuff
This is a totally fascinating theory. At some point, however, the engineers and scientists will discover that they no longer have answers as to how and why Google's algorithm asserts itself. Whatever challenges marketers face now to keep up will only become more difficult in the future. It reminds me of the story of how Lore, of the Star Trek saga, was dismantled after he began showing signs of emotional instability and malevolence. Who knows, when the algorithm completely takes over, Google may attempt to scale it back. We may be witnessing the "Matrix" in the making.
Prompted by this awesome Whiteboard Friday, I tried hard to answer the fundamental question:
What can we do to deal with the rise of machine learning in the Google search algorithm?
On https://www.texter-linz.at/future-of-seo.html you can read my article.
Thanks for your tips
I am wondering, if new technology moves in the direction we are anticipating, whether search will become very similar to the class system. It seems like the rich-get-richer idea. With a machine continuing to make updates based on high-performing websites, it will become more and more specific and narrow. It may prove difficult for someone new to gain power, much like the class system.
I agree with others; this seems a bit extreme. Google has also been known to stay in control and know what is going on. Would they be comfortable letting something else control and make changes to their search results?
That is a very interesting topic. I wonder, though, how a machine would be able to know what a user wants and whether they would be pleased with a result.
I know that some metrics like bounce rate, engagement levels, and other factors like time on site can tell Google how users are interacting with a given result, but it is still hard to imagine a search service operated by machines alone.
I will definitely look forward to seeing machines improve at automatic spam control, as well as general site-speed optimization.
Anyhow, great WBF, Rand; this is a topic that will surely be a focus area over the next 5-10 years or so.
I think it's not easy to know what future is waiting for us.
I know that SEO came along to fix things, to try to help and make our lives easier, and I believe the future will bring more relevance for everyone, because only that way will the internet stay tuned to us.
If the internet was a baby, it is now a teenager, not yet an adult.
Great article, Rand.
One thing that you mentioned, and that seems to be a good discussion point in the comments, is how no one knows how Google's algorithm works (it's a mythical creature), an ever-evolving formula (Terminator 4 will happen... soon), but overall it's ingenious.
Even though we don't know how, nor will we ever really have solid knowledge of how the algorithm works, one thing we do know, or need to know, is that it's all about the visitor. Much like what many businesses say -- "the customer is always right" -- to help drive up revenue, we need to focus on our visitors to drive up our SEO.
Once you focus on the user and their needs and motivations, to keep them coming back to your site for more, you have just encompassed 90% of the "known" algorithm.
Either way, I love SEO. It always keeps your brain ticking, and there's never a dull moment. Happy Friday, y'all.
Great post, Rand! I also read the article you mention a while back in January, and it's great to see it on Moz.
I've noticed marketing moving towards science in the past few years, and who knows, SEOs might turn into data scientists running experiments?
Or perhaps we'll turn to our human side and become UX designers, crafting target personas and using data analysis skills to predict visits? Interesting times ahead!
Even though we don't know what the future might hold, one thing we do know is that it's all about the visitor, and that's a good thing for sure.
Thanks for a great post!
Hi Rand, wow, deep learning and machine learning are very interesting topics. Thanks for sharing your insight.
So, as Rand says, although there is an extremely high level of complexity involved in creating a deep machine learning process, the number one goal here is to deliver a successful search. And in order to do that, we as SEOs need to think creatively and ensure that the content set up to address a particular keyword is completely relevant to that search term. The more relevant and rich the content is, the more likely the user is to be 'happy' with their experience, which consequently results in more shares, clicks, time spent, conversions, etc. This all points towards not focusing on just technical SEO and forced link building strategies, but creating content 'experiences' that are user-centric and completely relevant to each of the funnels you are targeting. I think this is a very natural evolution of machine learning, and it seems that human behaviour will become a metric itself.
A comfort level with taking concrete steps in an extremely vague landscape is half of what it takes to "enjoy" SEO. Our clients ask us to take actions now that will put them in a better position in the future--oh for the good old days when we were naive about how quickly the future was coming at us. It sounds like language may become even less important in the future of search results while actions and circumstances become increasingly so. Thanks Rand, really liked that.
I think you have a good point there, Simon. If no one ever really knows how the algorithm works, then we would never be able to form patterns, and this would significantly eliminate dodgy SEO tactics. People would eventually give up! It's genius...
Has anyone watched the film Ex Machina? A guy who invents the world's biggest search engine recruits an employee to test an AI he's just built. The number of real-world parallels to Google was freaky; it left me thinking, where is Matt Cutts again?
TL;DR... :( I think you should use headings and subheadings within your content. It makes your story more interesting and easier to read and understand.
Great article!
Google has long been doing research in the fields of deep learning and AI. The objective is to create a search engine capable of understanding and responding to user requests as if it were a person. It sounds like science fiction! And although this is not new, years ago the costs of this research were too high; today it is cheaper and therefore more accessible to many companies. The concept of deep learning relating a first search to a second one to refine the result seems great.
Thanks for your interesting post Rand!
Rand, this is a very interesting topic! I was wondering what Ray Kurzweil would say about this. He's actually got a very interesting book on the topic of artificial intelligence, and as you know, he's also one of the most brilliant minds at Google. In the following article he talks about a quantum device that detects and corrects its own errors: https://www.kurzweilai.net/a-quantum-device-that-detects-and-corrects-its-own-errors
Thanks for another great Whiteboard Friday.
I totally agree with what you're trying to achieve in the video, but think your description of unsupervised learning could be misleading.
Supervised and unsupervised learning are just different tools that lend themselves to solving different problems; unsupervised learning is not a 'next step' from supervised learning. My understanding is that supervised learning is useful when you know the outcome (e.g., these features did, or did not, result in a successful search). Unsupervised learning is useful when you have unlabelled data and need to find logical patterns or clusters within it.
If successful searches are the output to optimise for, what you're describing is a problem that lends itself to supervised learning.
You also mention that unsupervised learning will identify appropriate inputs by itself. This is not a characteristic unique to unsupervised learning; being able to identify influential features within a dataset is a critical characteristic of any machine learning algorithm.
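To make that distinction concrete, here's a minimal sketch on purely synthetic data -- the same pages handled the supervised way (with human-supplied outcome labels) and the unsupervised way (no labels at all):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
pages = rng.random((500, 4))  # four synthetic signals per page

# Supervised: outcomes are known (did the page produce a successful
# search?), so the model learns a mapping from features to that label.
labels = (pages[:, 0] + pages[:, 2] > 1.0).astype(int)
classifier = LogisticRegression().fit(pages, labels)
print("Supervised accuracy on the training data:",
      classifier.score(pages, labels))

# Unsupervised: no labels at all; the algorithm just groups pages that
# look alike, which answers a different question entirely.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pages)
print("Unsupervised cluster sizes:", np.bincount(clusters))
```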
UX, UX, UX :)
Hi fellows and Rand/Moz team,
I'd like to start by saying that real power lies in being able to cross-reference massive amounts of info, not only in collecting it. I think we'll all agree on this.
IMO Google has so many data inputs from people, sources, and actual "happenings" in the whole world that they should be able to feed those data into the "thinking machine," so that HUMAN news, events, publications, speeches, accidents, petrol prices, regional conflicts, etc. become MACHINE logical operators, variables, and subroutines.
This (interesting and kinda apocalyptic ;) WBF reinforced my previous feelings about what search will become, evolving from data evaluation and retrieval to personal advice. What I mean, with a real-life example:
- Search based on engineers' brainjuice:
- Search based on machine learning:
I hope I didn't go a bit too far with this example. Where I want to get to is the fact that Google (or whoever) will be thinking for us (that mentioned second brain) and eventually advising us to detour (to Oporto? Rome? Casablanca? Girona?) due to real-time situations that would carry weight in the searcher's intention.
No longer "I give you more or less specific facts to look for"; welcome to "If I were you, I'd rather fly 3 weeks earlier and stay in any village north of BCN with a railway connection, and use a T10 ticket for a planned week of hop-on/hop-off public transportation"... And with some time and data collection they will be able to predict specific conditions and trends (average flight cost, chance of rain at the destination, upcoming events, etc.), so finally following the advice of an AI will be the wise thing to do.
Now it's when I am getting scared... Red or Blue pill guys?
Thanks for your patience if you got here :)
Do you think Google will rank sites better if they have a better bounce rate?
If Google sees that visitors spend a lot of time on a site, will that help it rank better?
Will Google use Analytics data to rank sites?
Thank you, sir. I really loved your tutorials; I am a fan of your whiteboard classes. They help me improve my SEO skills. Keep the updates coming!
Taking into account the complexity of these neural networks and their current or coming implementation in Google's algorithms, SEOs can no longer talk about ranking factors, just as Rand says. Does it make sense, then, to devote ourselves to exploring this or that factor, diluted in a huge number of factors that only the machine understands?
Very interesting video; it follows on from Kelvin Newman's talk "What the Flash Crash & Black Boxes Can Teach Us about the Future of Search" at SearchLove London last year: basically, search is becoming a black box of unknowns, and in the future nobody will be able to explain what gets you to number 1.
Thanks! Great post, great links in the comments. Here is one more for all who are interested in a brief history of neural nets: The hidden story behind the code that runs our lives (The Chronicle of Higher Education). Recommended.
Very cool presentation.
As Google is already a better gamer than most of us... https://www.newsledge.com/googles-ai-is-a-gamer-and...
I reckon that, more than ever, what will remain in SEOs' hands is making sure content is actually good.
It's definitely going to make things that much more difficult when troubleshooting wonky rankings. But, as all SEOs must do, we will adapt and cross that bridge when we get there.
I first heard and learnt about AI reading the book of the same name by Jack Copeland many years ago. Ever since, I had thought it was a promising field that never quite took off. Today I've been surprised and thrilled both by your WBF and by the article you recommended. It seems that neural networks have finally found their place.
This is a topic that I find especially fascinating, as it collides with morals and philosophy, but after all, maybe they're not getting ahead of us if the first thing they learnt from YouTube is what a cat is!
Hey Rand,
the next Terminator will actually be the fifth installment of the series, so we should stop Terminator 5 from happening. Could a neural network help us with this task?
;-)
Google has been doing this stuff for ten years and keepin' their mouth shut about it.
I think one thing is for sure: if you want to stay in this competitive market, then you need a website that fulfills your audience's needs. I am sure that your website will never go down if you give the audience what they want.
Machine deep learning! Isn't it what we already dreamed of in the past? Now we are moving towards more of a mechanistic world, and suddenly we don't like it anymore. Deep learning to me means more privacy intrusion, more difficulty breathing independently, and being trapped in an artificial world.
Hey Rand, thank you for an interesting WBF. They are not always interesting to me, but this one is good because it is literally about the future of search. I would recommend that every SEO not miss this one.
Since the industry is moving in this direction, I believe that we are on the edge of a big shift, when today's SEOs will transform into data analysts in order to embrace this change.
Looks like Skynet is almost here...
Hello Rand Fishkin,
I have a question for you, if that's possible. What does the word "Moz" mean?
Thanks