Offline Explorer bot

What is Offline Explorer?

Offline Explorer is an offline browser application developed by MetaProducts. This desktop software allows users to download entire websites or specific web pages for offline viewing. Originally called Web Downloader, Offline Explorer functions as a specialized web crawler that systematically downloads website content to a local computer.

MetaProducts released multiple versions of Offline Explorer, with documented versions including 1.4, 1.9, and 2.5. The software identifies itself in server logs with user-agent strings like Offline Explorer/2.5, making it relatively easy for website administrators to identify when this tool is accessing their content.
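
If you want to check for this activity yourself, a simple log scan is usually enough. The Python sketch below counts requests per client IP in an Apache-style combined access log; the log path is an assumption and will vary with your server setup.

# Sketch: count requests from Offline Explorer in an Apache-style
# combined access log. The log path below is an assumption; adjust
# it for your server.
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"  # assumed location

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        # In the combined log format the user agent is the last
        # quoted field; a substring check is enough here.
        if "Offline Explorer" in line:
            client_ip = line.split(" ", 1)[0]  # first field is the client IP
            hits[client_ip] += 1

for client_ip, count in hits.most_common():
    print(f"{client_ip}\t{count} requests")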

Offline Explorer works by following links on web pages to discover and download connected content. Users can configure various download parameters, including the depth of link following, content filters, and scheduling options. The software creates a local copy of a website that preserves its navigation structure, allowing users to browse the content without an internet connection.
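
To make the link-following behavior concrete, here is a minimal Python sketch of depth-limited crawling. It illustrates the general technique rather than Offline Explorer's actual implementation: download a page, extract its links, and recurse until the configured depth runs out.

# Minimal sketch of depth-limited link following, the crawling
# technique described above. This is an illustration only, not
# Offline Explorer's actual code.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(url, depth, seen=None):
    """Download url, then follow its links until depth is exhausted."""
    seen = seen if seen is not None else set()
    if depth < 0 or url in seen:
        return
    seen.add(url)
    try:
        html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except OSError:
        return  # skip pages that fail to download
    print(f"saved {url} ({len(html)} bytes)")
    parser = LinkExtractor()
    parser.feed(html)
    for link in parser.links:
        crawl(urljoin(url, link), depth - 1, seen)

crawl("https://example.com/", depth=1)  # depth of link following

A full offline browser goes further by saving each page to disk and rewriting its links to point at the local copies, which is how the preserved navigation structure works.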

Unlike search engine crawlers that index content for public search, Offline Explorer downloads content exclusively for the individual user who initiated the download. The software offers features like password-protected website access, content filtering, and URL substitution rules to enhance the offline browsing experience.

Why is Offline Explorer crawling my site?

If you notice Offline Explorer in your server logs, it means an individual user has specifically targeted your website for offline downloading. This is not an automated web-wide crawler but rather a tool being used by someone who wants access to your content without maintaining a constant internet connection.

The crawler may visit your site repeatedly if the user has configured scheduled updates to refresh their offline copy. The frequency and scope of crawling depend entirely on how the individual user has configured their download project. Some users might download your entire site, while others might focus on specific sections or content types.

Offline Explorer's crawling is typically initiated by an end user for personal purposes such as research, archiving, or accessing information in locations with limited connectivity. Unlike search engine bots, it has no central organization directing which sites to crawl; each instance represents an individual user's decision to download your content.

What is the purpose of Offline Explorer?

Offline Explorer serves users who need to access web content without a constant internet connection. Common use cases include:

  • Allowing travelers to browse websites while disconnected from the internet
  • Enabling researchers to archive and reference web content without relying on the original site's availability
  • Providing access to online resources in areas with limited, expensive, or unreliable internet connectivity
  • Creating local copies of websites for testing, development, or personal reference

The software doesn't contribute to any centralized database or service. Instead, it creates private, local copies of web content exclusively for the individual user who initiated the download. Unlike search engine crawlers that provide broader visibility benefits, Offline Explorer offers no direct value to the website being crawled beyond enabling an additional user to access the content.

How do I block Offline Explorer?

Offline Explorer respects the robots.txt protocol, allowing website administrators to control its access. To block Offline Explorer from crawling your site, add the following directives to your robots.txt file:

User-agent: Offline Explorer
Disallow: /

This configuration specifically targets Offline Explorer. If you want to block all offline browsers and similar downloading tools, you might consider a more comprehensive approach:

User-agent: Offline Explorer
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: HTTrack
Disallow: /

Beyond robots.txt, you can implement server-side controls that detect and block the Offline Explorer user agent. This approach may be necessary if you notice the tool isn't respecting your robots.txt directives. Some website owners also implement rate limiting to prevent aggressive downloading that could impact server performance.
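
As one example of such a server-side control, the sketch below rejects matching user agents at the application layer. Flask is an assumed framework choice here; the same check translates to any web server or middleware.

# Sketch: reject requests from Offline Explorer and similar offline
# browsers at the application layer. Flask is an assumed framework
# choice; the user-agent check itself is framework-agnostic.
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_AGENTS = ("Offline Explorer", "WebCopier", "HTTrack")

@app.before_request
def block_offline_browsers():
    user_agent = request.headers.get("User-Agent", "")
    if any(bot in user_agent for bot in BLOCKED_AGENTS):
        abort(403)  # refuse the request before serving any content

@app.route("/")
def index():
    return "Regular visitors see the site as usual."

if __name__ == "__main__":
    app.run()

Note that user-agent strings are trivial to spoof, so this check only stops clients that identify themselves honestly; IP-based rate limiting is a useful complement.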

Keep in mind that blocking Offline Explorer prevents legitimate users from creating offline copies of your site, which might be valuable in situations with limited connectivity. However, if you're concerned about bandwidth usage, server load, or unauthorized copying of your content, blocking may be appropriate. For subscription-based or access-controlled websites, additional authentication measures are recommended since offline browsers can sometimes be configured to access password-protected content.

Operated by: Content archiver

AI model training: Not used to train AI or LLMs
Acts on behalf of user: Yes, behavior is triggered by a real user action
Obeys directives: Yes, obeys robots.txt rules
User Agent: Offline Explorer/2.5