Tim Bray states what must be, if you're past your teens, the obvious: namely that Facebook and, by implication and according to the comments, social network sites in general are just a waste of time. Who would have thought?
Feedfetcher is Google's RSS feed grabber that, it seems, has been abandoned and thrown out like an old bone. It leaves a log entry containing the URL http://www.google.com/feedfetcher.html. Following that address one gets redirected to Google's Webmaster Help Centre, which doesn't appear to know anything about Feedfetcher. The Help Centre search function doesn't help either.
Searching Google itself helps, although the first hit redirects to the Help Centre already seen. Further down the result list is a link pointing at http://scholar.google.com/feedfetcher.html.
Following one more link one finally arrives at http://scholar.google.com/webmasters/remove.html#feedfetcher, where one learns "Since Feedfetcher requests come from explicit action by human users, Feedfetcher has been designed to ignore robots.txt guidelines. It's not possible for Google to restrict access to a publicly available feed."
And there's no way to stop Google requesting a URL that has been entered by a "human user" but which just does not exist. Solid engineering that is.
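If you really want to keep Feedfetcher out you're left with the web server itself. A rough sketch [my suggestion, not something Google offers], for an Apache server with mod_setenvif and assuming the bot still announces itself as Feedfetcher-Google, would go into .htaccess as
SetEnvIfNoCase User-Agent "Feedfetcher-Google" feedfetcher
Order Allow,Deny
Allow from all
Deny from env=feedfetcher
which refuses the requests at the door instead of stopping them, but that's the only lever Google leaves you.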
It's been all over the web: Google engineers have developed a refined image search algorithm that will, once it scales to current requirements, replace what's in use now. It'll also replace, as Search Engine Watch points out, the many hours of free labour of those who lent a helping hand with the Google Image Labeler.
Some people never understand, or accept, that it's not some HTML magic but content regarded as remarkable by others which leads to link popularity. An example I've just discovered is the Meggaflash site, where the "art" is in the writing. Shame that their link page has too much dead wood.
Here's proof that getting others to whitewash your fences can pay.
URLs should really be for ever. It seems, however, that a lot of sites, even major ones, remove pages after a few months. Because I don't intend to send anyone up the garden path I've decided to retire outgoing links, depending on the target destination, after a few months. Should minimise my efforts to keep the remaining links up to date.
Microsoft announces its intention to increase spidering. Their Live Search Blog details how one can [supposedly] throttle this in case it gets out of hand. As long as Ballmer calls me and other Linux users thieves I will, however, continue to use my own robots.txt based mechanism of dealing with that issue.
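The simplest such mechanism [not necessarily the exact one I use] takes two lines in robots.txt:
User-agent: msnbot
Disallow: /
Whether the spider honours that is, of course, up to Microsoft.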
If you're selling online you will be familiar with the increasing complexity of the Payment Card Industry Data Security Standard, a set of rules and regulations aiming to increase the security of online systems. Many of these are specifically designed with Windows PCs in mind and don't apply to other systems, but others, such as those governing the construction and validity period of passwords for payment interfaces, apply to most merchants.
It's obvious they have been created by people who neither have to implement these rules nor remember the passwords created according to them. With almost a dozen restrictions on which character combinations are prohibited, the result is a genuine limit on the number of possible passwords. And because these need to be changed regularly, and can't be "similar" to the last ten passwords used, in the end people will jot them down somewhere, defeating the object of the exercise.
Microsoft's Internet Exploder is unable to render PNG graphics with transparent backgrounds correctly. There are work-arounds for this, requiring conditional comments, the execution of a script in the browser, plus the use of non-standard CSS extensions. This often works, but can lead to extended page rendering times.
Alternatively you can use pngcrush to set a property of the PNG file that forces MSIE to render the image background in the colour of your choice. At the same time pngcrush can be used to prevent browsers from adjusting gamma values: if an image and the HTML page use identical colour values, they normally should be displayed with identical appearance, and adjusting gamma for one of them obviously leads to the visible differences designers often struggle with.
If you want to fix the background colour in a PNG image, use the command
pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB -bkgd $1 $2 $3 infile outfile
where $1, $2 and $3 are the [decimal] colour codes to be used by IE instead of the default grey it normally uses. Infile and outfile are the names of the input and output images. If you don't need to adjust the colour values and only want to prevent gamma correction, omit the -bkgd $1 $2 $3 part of the command line.
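For a plain white background, for example, the call becomes
pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB -bkgd 255 255 255 logo.png logo-ie.png
with logo.png and logo-ie.png standing in for whatever your files are called.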
So called Extended Validation certificates are supposed to suggest that a site sporting such a certificate is more reliable and secure than ordinary SSL sites. That's why starting with Microsoft's Internet Explorer more and more browsers render the address bar with a green background when encountering an EV Cert.
Unfortunately these certificates just certify that the identity of the site owner has been checked somewhat more thoroughly. That does nothing for the real security, or lack thereof, of the actual web site.
Paypal is considering blocking certain browsers to improve on its anti-phishing measures, forcing many people to change the user-agent string setting in their browsers' configuration. Paypal does not consider putting a stop to mailing out tons of HTML email containing login URLs, which trains people to blindly click on email links.
For $US 50 a month you can now run your own subdomain blog on a .edu domain provided you don't do anything commonly associated with the red light district, or worse. Somehow Mr Keller established a "Pickering University" in 2006 and further managed to get pi.edu registered for this very establishment - despite using a Hotmail address. And now he thinks it's payback time big time. Because there's a lot of people out there who still believe that .edu links are better than others, and who still insist that search engine staff are so ignorant they'll never notice. Maybe they don't even have internet access.
Took me a while but I think I now understand what Twitter is actually good for.
I just read that Yahoo will be introducing Slurp Version 3.0 after some upgrades to code and infrastructure. If there are still problems one is supposed to leave a message at Yahoo's Site Explorer Suggestion Board in order to reach those that are supposed to control things.
Just thinking about what must have gone on in his mind makes me sick.
Sites creating each requested page dynamically are affected most, but even sites delivering static HTML pages can suffer, because Yahoo's spider Slurp appears to have been, like most other Yahoo properties, bolted together without any concern about efficient use of resources [or others']. This is even apparent when requesting Yahoo's Spider FAQ, which is spread out over many pages and, unfortunately, doesn't stop anywhere if one believes a number of forum posts complaining about Slurp's excessive spidering rate.
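Yahoo's documented remedy, buried somewhere in that FAQ, is the Crawl-delay directive in robots.txt, which Slurp is said to honour; something like
User-agent: Slurp
Crawl-delay: 60
where the number is the pause in seconds between requests. Whether it actually helps is, judging by those forum posts, another matter.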
Claiming it happens to a lot of people despite the hard work you devote to preventing this type of thing from happening, Google's Webmaster Central Blog presents some first aid on how to salvage a bad situation.
The main aspect elaborated is the site itself: how to prevent Google and users from reaching its pages as long as the server is not known to be clean. What's missing are recommendations on how to prevent this thing from happening in the first place. Considering that those confronted with the problem are normally interested only in damage limitation rather than proactive prevention, it's just as well Google doesn't repeat advice that's always been out there.
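One way of doing that keeping-out [my suggestion, not necessarily Google's] is to answer every request with a 503 until the clean-up is done, which tells crawlers to come back later without the pages being treated as gone for good. On a reasonably recent Apache with mod_rewrite that's
RewriteEngine On
RewriteRule .* - [R=503,L]
ErrorDocument 503 "Temporarily down for maintenance"
dropped into the site's .htaccess and removed once the server is clean again.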
Was it an oversight? As John Gruber pointed out yesterday, HuddleChat, one of the first Google Apps introduced and causing immediate controversy, was a clone of Campfire, an older chat application created by a small software house out there. Later in the day he says: Even if you think it's OK to copy someone else's application feature-for-feature, the big fear for developers with something like Google App Engine is that you're trusting Google with all of your source code. Why should small indie web developers trust Google when the first example app is a Google rip-off of a small indie web app?
Seth Finkelstein shows in his blog [yet again] how some people seem to have a talent for provoking others into providing links, a behaviour traditionally called trolling. All it needs is a controversial statement and some fanboys, it seems.
Google now also lends a helping hand when creating, testing, hosting, publicising and milking online applications. Won't be long now, and all our data is online at Google, we earn our keep online through Google, and we'll be living online at Google.
And since yesterday, and that is cool, Google even points to lots of code snippets. That's as quick as checking man pages and often just enough to trigger one's memory.
There's a new CMS. I don't think the name is promising.
If you need more positive hits, a reduction in precision can help achieve that goal, especially if it allows you to sell a few more ads. That's probably the reason behind Google's change in handling phrase searches when there are no results. Before, you were told so. Now Google presents the results you would have had had you not been so bold as to request exact phrase matches.
Advertisers such as Ask seem to ask for this way of dealing with terms, as their keyword selections aren't always thought through.
Helen Hollick is a British author. I enjoyed reading Sea Witch last year, bought at Amazon in the UK. But to follow the story I had to buy Pirate Code from amazon.com, because the title was no longer available in the UK.
Having fond memories of Bernard Cornwell's Arthur Trilogy, read in 2003, I had purchased Helen Hollick's The Kingmaking, Volume I of Pendragon's Banner Trilogy, with mixed anticipation, and decided after about 70 pages that I loved it too. Yet volumes two and three were "temporarily" out of stock when I ordered them. Tuesday evening last week I decided to order them via amazon.com, cancelling the UK order. The books arrived here on Friday, even though I selected cheapest shipping. We can still learn from the Yankees.
We get new coins. Beautiful new coins, designed by a twenty-six-year-old designer who had never dabbled in coins before.
The Americans get a new dollar bill, designed by an agency with 147 years' experience.
The old digital divide (rich versus poor) still exists. But now, according to those who like to talk, there's a new one: the more friends you have online, the better your experience on sites such as Facebook, Twitter, FriendFeed or Upcoming.
What no-one told Scoble is that you have an even better experience if you spend time with real people away from the keyboard.
Last month I mentioned that Google had acquired an SEO firm as part of the Doubleclick deal. Google just announced they're doing The Right Thing:
I just read that Adobe provide a free service converting PDF files into HTML or plain text. Obviously you don't need that if your site is hosted on a Unix server [or Linux, BSD, OS X or anything else except Windows], where you will find tools such as pdftohtml and pdftotext among other conversion routines.
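Their use is as unspectacular as it gets; for a file called paper.pdf [substitute your own]
pdftotext paper.pdf
pdftohtml paper.pdf paper
the first writes paper.txt, the second a set of HTML files named after paper, and the man pages list options for page ranges, encodings and the like.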
It's not what you know, but who you know, that matters: http://igoogleforyou.com/
I'm not sure if this is a good idea. But it is an idea. And it receives mention.
The headline Google search behind most phishing sites somehow insinuates that it's Google's fault that people fall for phishing, or that Google is somehow involved, when in reality it's ignorance on the part of those running servers, those frequenting some of these servers, and most of the hacks reporting on interweb issues.
At least the article highlights that there is a large number of known and actively exploited bugs in PHP, the mother of most shopping carts and CMS.
© Copyright 1998 - 2012 Klaus Schallhorn.