robots.txt is exactly for that: crawlers like Googlebot. It's not designed for end-users' software to get a "ruling" on where it may and may not go. It exists so web spiders don't dig around in links that webmasters would prefer a search engine not to find.
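
If you're writing a crawler yourself, honoring robots.txt is straightforward. Here's a minimal sketch using Python's standard-library `urllib.robotparser`; the site URL and the user-agent name are placeholders, not anything from a real robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (hypothetical example domain).
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Ask whether our (made-up) user agent is allowed to crawl a given path.
if rp.can_fetch("MyCrawlerBot", "https://example.com/private/page.html"):
    print("Allowed to crawl")
else:
    print("Disallowed by robots.txt")
```

Note that this is purely advisory: nothing stops a client from ignoring the file, which is exactly why it's useless as an access-control mechanism for end-user software.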