About the Hall-Monitor AI agent

Learn about our AI agent and how it accesses websites.

Written by Kai Forsyth
Updated 8 days ago

Hall has developed a custom AI agent called Hall-Monitor that helps businesses understand and improve their visibility across AI-powered search platforms. Our agent operates with transparency and respect for website owners while providing valuable insights about AI search performance.

The primary functions of Hall-Monitor are to:

  • Analyze websites to identify opportunities to improve performance and visibility in AI search platforms
  • Analyze web pages cited in answers and responses across the AI search platforms we monitor
  • Provide insights on how content is being referenced and utilized by AI systems

How Hall-Monitor operates

Hall-Monitor crawls publicly accessible web pages to analyze content structure, relevance, and citation patterns. Our crawler focuses on understanding how content performs in AI search contexts, helping website owners optimize for this growing search landscape.

The crawling process begins with identified URLs and follows links to build a comprehensive understanding of site content and its relationship to AI search performance. All crawling is done respectfully with appropriate delays and load management.
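As a rough illustration of that pattern (this is a sketch only, not Hall-Monitor's actual code), a polite crawl generally reads robots.txt first, honors any crawl delay, and only fetches pages the rules allow. The seed URL and fallback delay below are assumptions made for the example.

# Illustrative sketch only; not Hall-Monitor's actual implementation.
import time
import urllib.request
import urllib.robotparser
from urllib.parse import urljoin

USER_AGENT = "Mozilla/5.0 (compatible; Hall-Monitor/1.0; +http://usehall.com/agent)"
SEED_URL = "https://example.com/"  # hypothetical starting point

robots = urllib.robotparser.RobotFileParser()
robots.set_url(urljoin(SEED_URL, "/robots.txt"))
robots.read()

delay = robots.crawl_delay("Hall-Monitor") or 1  # fall back to a 1-second pause

queue = [SEED_URL]
seen = set()

while queue:
    url = queue.pop(0)
    if url in seen or not robots.can_fetch("Hall-Monitor", url):
        continue
    seen.add(url)
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        html = response.read()
    # Analyze content structure and citations, extract links into the queue, etc.
    time.sleep(delay)  # honor the configured crawl delay between requests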

User agent information

Hall-Monitor can be identified by the following user agent string:

Mozilla/5.0 (compatible; Hall-Monitor/1.0; +http://usehall.com/agent)

When declaring rules in robots.txt, you should use the token Hall-Monitor.
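If you want to confirm whether Hall-Monitor has visited your site, you can search your server access logs for that user agent token. The snippet below is a minimal sketch that assumes a standard combined log format (where the user agent appears verbatim on each line) and a hypothetical log file path; adjust both for your setup.

# Minimal sketch: count Hall-Monitor requests in an access log.
LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path

hits = 0
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Hall-Monitor" in line:
            hits += 1
            # print(line.strip())  # uncomment to inspect individual requests

print(f"Hall-Monitor requests: {hits}")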

Controlling access to your site

We provide clear, user-friendly options to control our crawler.

To completely block Hall-Monitor from accessing your site, add the following to your robots.txt file:

User-agent: Hall-Monitor
Disallow: /

To block Hall-Monitor from specific sections of your site:

User-agent: Hall-Monitor
Disallow: /private/
Disallow: /admin/

To control crawling frequency, specify a crawl delay (in seconds):

User-agent: Hall-Monitor
Crawl-delay: 10
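If you want to sanity-check rules like the ones above before publishing them, you can parse them locally with Python's built-in urllib.robotparser. The sketch below combines the section rules and crawl delay shown above and uses hypothetical example URLs; swap in your own paths.

# Minimal sketch: verify robots.txt rules locally before publishing them.
import urllib.robotparser

rules = """\
User-agent: Hall-Monitor
Disallow: /private/
Disallow: /admin/
Crawl-delay: 10
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

agent = "Hall-Monitor"
print(parser.can_fetch(agent, "https://example.com/private/report"))  # False
print(parser.can_fetch(agent, "https://example.com/blog/post"))       # True
print(parser.crawl_delay(agent))                                      # 10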

Policies and commitments

Respectful crawling

Hall-Monitor strictly follows robots.txt directives including disallow rules and crawl-delay settings to ensure we respect your site’s access preferences. We implement automatic load management that reduces our crawling speed when encountering server errors, helping to minimize stress on sites that may be experiencing outages or high server load. Additionally, our crawler caches common resources like images, CSS files, and scripts to minimize repeated requests and reduce bandwidth consumption on your servers.
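As an illustration of that kind of load management (again, a sketch rather than our actual implementation), a crawler can widen its pause between requests whenever it sees server errors and ease back once responses are healthy. The multipliers and limits below are hypothetical.

# Illustrative adaptive-delay pattern; thresholds are hypothetical.
import time

class AdaptiveDelay:
    def __init__(self, base=1.0, maximum=60.0):
        self.base = base        # normal pause between requests, in seconds
        self.maximum = maximum  # upper bound on the pause
        self.current = base

    def record(self, status_code):
        if status_code >= 500:
            # Server trouble: back off, up to the maximum pause.
            self.current = min(self.current * 2, self.maximum)
        else:
            # Healthy response: ease back toward the normal pace.
            self.current = max(self.base, self.current / 2)

    def wait(self):
        time.sleep(self.current)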

Transparency

We believe in complete transparency about our crawling activities. Hall-Monitor uses clearly identifiable user agent strings so you can easily recognize our traffic in your server logs. Our crawling practices are fully documented and publicly available, and we maintain open communication channels for website owners who have questions or concerns about our crawler’s behavior.

Data usage

All data collected by Hall-Monitor is used solely for AI search analysis and optimization insights that benefit website owners. We respect website owner preferences and privacy considerations in all our data processing activities. Information collected through our crawling is processed in accordance with our privacy policy and applicable data protection standards, ensuring your content is handled responsibly and ethically.

More information

If you have questions, concerns, or need assistance with Hall-Monitor’s crawling behavior, please contact us.

We’re committed to being responsive to website owner concerns and will work quickly to address any issues you may encounter.

Can’t find an answer?

Get in touch with our support team and we’ll help you out.