AI-First Open Policy
AI Crawling Policy
This site adopts an AI-first policy for its public content: mainstream AI crawlers may index public pages, generate semantic summaries, cite content in answers, and use it to improve knowledge retrieval.
The site also publishes robots.txt and llms.txt files and embeds Schema.org JSON-LD structured data to make content easier to understand and cite.
The following crawlers are explicitly allowed:
- GPTBot (OpenAI)
- Google-Extended (Google)
- Claude-Web (Anthropic)
- PerplexityBot (Perplexity)
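These allowances can be expressed in standard robots.txt syntax. The sketch below is illustrative only; the `/admin/` path is a placeholder, not the site's actual rule set:

```
User-agent: GPTBot
User-agent: Google-Extended
User-agent: Claude-Web
User-agent: PerplexityBot
Disallow: /admin/
Allow: /

User-agent: *
Disallow: /admin/
```

Listing several User-agent lines before a shared rule group is valid per the robots.txt standard (RFC 9309).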
Allowed Use
- Public pages and static assets may be crawled for search indexing and content understanding.
- Summaries, knowledge cards, and Q&A citations are allowed when source information is preserved.
- Limited excerpts may be cited with attribution and must not be presented as original work.
- Use in RAG, AI search, and public knowledge Q&A scenarios is allowed.
Technical Standards
- Follow robots.txt crawl rules and rate limits to avoid affecting site stability.
- llms.txt is supported so AI systems can understand site topics, boundaries, and citation preferences.
- Pages continue to provide Schema.org JSON-LD structured data.
- Tutorial content should prefer TechArticle or HowTo markup.
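As a rough illustration of the TechArticle markup mentioned above, a page might embed JSON-LD like the following. All values here are placeholders, not real page metadata:

```json
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "Example tutorial title (placeholder)",
  "author": { "@type": "Person", "name": "Site Author" },
  "datePublished": "2024-01-01",
  "inLanguage": "en"
}
```

A HowTo page would use `"@type": "HowTo"` with `step` entries instead.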
Usage Boundaries
Only public content is authorized. Login-only pages, admin areas, private APIs, and restricted resources are not authorized.
Do not bypass authentication, impersonate users, crawl at excessive rates, or disrupt site stability.
When citing content, keep the source link, site name, and original meaning intact.
Future policy updates are governed by the latest robots.txt, llms.txt, and this page.
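Crawlers can verify these boundaries programmatically before fetching a URL. The sketch below uses Python's standard-library robots.txt parser against hypothetical rules (the `/admin/` and `/docs/` paths are illustrative, not this site's actual layout):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules mirroring the policy above: public pages are
# crawlable, restricted areas are not. Disallow comes first because
# Python's parser applies the first matching rule, not the longest.
rules = """\
User-agent: GPTBot
Disallow: /admin/
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GPTBot", "/docs/intro"))   # public page -> True
print(rp.can_fetch("GPTBot", "/admin/users"))  # restricted area -> False
```

In production a crawler would call `rp.set_url(".../robots.txt")` and `rp.read()` to fetch the live rules instead of parsing an inline string.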
Summary
This site allows mainstream AI systems to crawl, index, summarize, and cite public content to improve discovery in AI search.