Google Search Updates: Good or bad?

Read a great post this morning from InfoCommerce, a consulting group that focuses on business information content and database publishing. Here’s a snippet, but I think you should head over to their blog for the whole post on Google’s recent spam-prevention search updates.

…let’s think a little more about this new filtering capability offered by Google. What if it were to truly catch on? The basic concept is that you can now easily and permanently take out any domain from your Google search results. Consider what this means: suddenly, nobody is seeing the same search results. What is the implication for search engine optimization programs and providers? What happens to search engine marketing?

more at the InfoCommerce blog

Of course, the clear message from Google, Bing and other major search engines is that already, no one gets the same search results. [Despite that, some SEO firms are still trying to sell you rankings. Snake oil alert!] But the greater ability to customize your own results on the fly means not only that you won’t see what I see, but that we may accidentally be giving ourselves poorer results.
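For illustration only, here’s a minimal sketch of what per-user domain blocking amounts to (a toy example of my own; the function and domains are hypothetical, not Google’s implementation): each user keeps a personal blocklist, and any result from a blocked domain is simply filtered out before display — which is exactly why no two people need see the same results.

```python
from urllib.parse import urlparse

def filter_results(results, blocked_domains):
    """Drop any result whose host matches a domain the user has blocked.

    Toy illustration, not Google's actual mechanism. `results` is a list
    of result URLs; `blocked_domains` is the user's personal blocklist.
    Subdomains of a blocked domain are treated as blocked too.
    """
    def is_blocked(url):
        host = urlparse(url).netloc.lower()
        return any(host == d or host.endswith("." + d) for d in blocked_domains)

    return [url for url in results if not is_blocked(url)]

results = [
    "https://example.com/answer",
    "https://spam.contentfarm.example/half-answer",
]
# Blocking "contentfarm.example" removes the second result (and any subdomain of it).
print(filter_results(results, {"contentfarm.example"}))
```

Trivial as it is, the sketch makes the personalization point concrete: the filter runs against each user’s own list, so the same query returns different result sets for different people.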

Personally, I’m not sorry to see Google act on the content farm spam problem. I’ve noticed it a lot in my own searching, as I’ve tried to teach my 5-year-old and my 11-year-old how to judge whether a website is a quality source for school research projects. To an untrained researcher, lots of content farm posts look really useful, but they usually give you only half an answer, or even the wrong information.

But the implications of fiddling with the mechanism, and of allowing us to fiddle with it, also concern me. Will we start to see malicious campaigns to “vote down” certain sites in hopes of getting Google to dump them? I suspect that Google will figure out when humans need to intervene in those kinds of cases, but in other areas Google has a track record of letting the machines act first and letting the humans correct later when necessary, with little regard for the downstream consequences. So I hope that Google is treading very, very carefully here.

Context Is Always Critical

Got into an interesting back-channel discussion today in the South by Southwest session called “Beyond Algorithms: Search and the Semantic Web.”

I did write another post on the panel, so I won’t go into the details here, except to say that I found the backchannel more thought-provoking than the panel itself.

So when I got into the session, I realized I had left my power cord in the hotel room and I was running on reserve power. I sent a tweet to ask if anyone in the rather large ballroom had a Mac power cord I could borrow.

I quickly heard back from Tim Bentley, who was generous enough to share his power cord with me for the session. And it was a fitting coincidence when I noticed he’d come from Aardvark, a social search engine.

I think it was during the part of the panel where they were discussing how standard search engines don’t really know whether they’ve answered your question that Bentley tweeted this:

#beyondalgorithms panel is basically talking about how to do algorithmically what Aardvark is doing now socially

So a few minutes later, I started wondering about Bentley’s perspective on Wolfram|Alpha, which bills itself as a “computational knowledge engine” and promotes the fact that its information is curated by experts. I have a long-standing bias against people who purport to be “experts” — it’s a knee-jerk sort of reaction and I can acknowledge that.

On the panel, a tangential discussion cropped up about how much context matters in search. It was the sort of conversation that I was far more interested in than the topics they actually intended to discuss. So it got me thinking that it’s not expert curation or knowledge that I dispute — it’s so-called expert knowledge applied without regard to context.

There are so few questions in this world with a black and white answer. Once you go beyond 2+2=4, you need to know the context to answer. And then most expert opinion can sound downright asinine when it ignores context.

So that’s the kind of question I’d like to see explored deeply: How do we apply context to computer inputs [searches, using the computer, applications, whatever] in order to more accurately and efficiently reach solutions for users?

Beyond Algorithms: Search and the Semantic Web

Wow. There are a lot of speakers here, and they aren’t all listed in the program… and there’s no way I’ll get them all straight. I’ll see what I can do.

Gil Elbaz, founder/CEO of Factual. They simplify access to clean, reliable data for publishers; they structure and clean data.

Danny Sullivan, Search Engine Land

Carla Thompson, Guidewire Group. Search and semantic analyst.

Dag Kittlaus, Siri

Barak Berkowitz, who has been at Wolfram Alpha for 10 days.

Will Hunsinger, CEO of Evri and Twine

Nova Spivack, founder of Twine, now at LiveMatrix.

Barney Pell, Microsoft Bing team.

Haha, first real question is, what does semantics mean? We’re going to discuss the semantics of semantics.

Someone [Pell, I think] says, it’s about meaning, figuring out which words match with other words. Also about the abstractions that tie words together. It’s a middle layer that connects the underlying layer to the higher intent.

So Google and Bing are already semantic search engines? Yes.

Thompson says, no, that doesn’t clear it up. You lost the consumer after the word “abstraction.” I think we should get rid of the term.

Pell: I think it’s not a consumer term. It’s a technology term.

Kittlaus says, I’ve been in the Valley less than 3 years and I’m amazed at how little creativity there is in the search field. People argue about who has the biggest database and not about how to solve users’ problems.

Panel is arguing about whether or not today’s search results are adequate or should be replaced with something yet-to-be-conceived. Total geek amusement is all you can say about this.

Good point: a panelist says we have a scalability issue. There’s so much accessible data today that a solution that could handle a million pieces of data isn’t the best solution for a trillion pieces of data.

Right now, search is good at answering a single question. When you need to handle a complex task, you may have to make several searches. Engines need to better understand the user to better handle complexity.

Spivack: OK are we all just debating Google’s next feature? Or is there room for others?

Pell contends that many search engines [albeit not Google and Bing] are already working together.

Some discussion about the importance/desirability of including social and context info in search results — no discussion of privacy. All about how much better it will make search results.

Spivack comments on Wolfram Alpha using expert curation instead of community curation. Would love to hear more discussion on that point.

Now a discussion of how the engine knows whether it’s answered you. And a point made that many searches are refined over time… you search for info on getting a mortgage, you ask different things over time, and two months later you buy a house. At what point was your question “answered”?

The backchannel on this panel is pretty negative. I think it’s because there are too many people on the panel. And it perhaps could have used a little more planning.

Crazy-smart small detail in Flickr search

So I’m looking up some Creative Commons images on Flickr for a presentation I’m finalizing. And I just noticed a crazy-smart detail in the Flickr search.

I’m clicking from page to page in the search results, using the numbered buttons shown here.

[Screenshot: Flickr search pagination, 2009-09-22]

I noticed that I was just clicking over and over as I paged through the results, without moving my hand on my trackpad. That’s because Flickr moves the numberline each time I click. So now that I’m on page 14, page 15 is under my cursor. When I click that button, Flickr moves the line so that 16 is under my cursor.

I love this.
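The trick generalizes to any paginator: render the page-number strip as a sliding window positioned so the “next” page always lands in the same slot, which keeps the next-page button under the cursor click after click. Here’s a rough sketch of that windowing logic in Python (my own illustration; the strip width and slot position are assumptions, not Flickr’s actual code):

```python
def page_window(current, total, width=9, next_slot=5):
    """Return the run of page numbers to display so that `current + 1`
    sits at a fixed position (`next_slot`, 1-indexed) in the strip.

    Keeping the next page in the same slot means the "next" button stays
    under the user's cursor as they page through results.
    """
    # Place current+1 at next_slot, then clamp to the valid page range.
    start = (current + 1) - (next_slot - 1)
    start = max(1, min(start, total - width + 1))
    return list(range(start, start + width))

# On page 14, the strip shows 11-19, so page 15 sits in slot 5;
# click it, and the strip shifts to 12-20, putting 16 in slot 5.
print(page_window(14, 100))
print(page_window(15, 100))
```

Near the first and last pages the window clamps to the valid range, so the slot alignment gives way gracefully rather than showing nonexistent page numbers.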