A robots.txt file can be used to restrict the areas of a website that search engines are allowed to crawl.
During the Tomcat installation, you can create a robots.txt file that instructs web crawlers which parts of the site they are not allowed to access. Note, however, that this is only a request to the web crawler, which it may ignore.
For example, if web crawlers are to be prohibited from accessing all subpages of the site, the robots.txt should have the following content:
User-agent: *
Disallow: /
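If, instead, only individual areas are to be excluded, the file can list several Disallow entries under the same User-agent group. The following sketch uses the hypothetical paths /internal/ and /reports/ purely as illustrations; replace them with the paths that actually apply to your installation:
User-agent: *
Disallow: /internal/
Disallow: /reports/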
After creating the robots.txt, place it in the ROOT web application directory of Tomcat:
<installation directory Tomcat>\webapps\ROOT
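As a sketch, the file can be copied there from a Windows command prompt; the placeholder <installation directory Tomcat> stands for your actual installation path, just as in the line above:
copy robots.txt "<installation directory Tomcat>\webapps\ROOT\"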
To check that the file is served correctly, you can open the corresponding link in a browser, as shown in the picture below.
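For example, assuming Tomcat runs on the local machine with the default HTTP port 8080, the file should be reachable at a URL of the following form:
http://localhost:8080/robots.txt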