Most internet users only access the surface web when doing their regular online tasks. In this article, I will discuss the different layers that form the web and describe what we expect to see on each layer. However, before I begin talking about the web layers, readers should be able to differentiate between two terms that most internet users use interchangeably: the World Wide Web and the Internet.

Surface web

Also known as the visible or clear web, the surface web is the portion of the web that standard search engines like Google, Yahoo! and Bing can index and access. It constitutes about 4% of web content and can be reached with a standard web browser without any additional software. Search engines index web content by sending robots (also known as crawlers or spiders) to discover new and updated content. These crawlers travel across the internet and discover new content by following hyperlinks within the domains they visit.

The second layer of the web is the deep web. This layer is the largest in size and contains within it another hidden sub-layer: the dark web, or darknet. Both the surface web and the deep web can be accessed with a regular web browser like Firefox or Chrome. The darknet, however, requires special software to access.
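To make the crawling idea concrete, here is a minimal sketch of the link-discovery step a crawler performs on each page it visits: parse the HTML, collect every hyperlink, and resolve relative links against the page's URL so they can be queued for future visits. The markup and URLs below are hypothetical examples, and a real crawler would also fetch pages over the network, deduplicate URLs, and respect robots.txt.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links (e.g. "/about") against the page URL
                    self.links.append(urljoin(self.base_url, value))

# Hypothetical page content a crawler might have fetched:
page = '<a href="/about">About</a> <a href="https://example.org/blog">Blog</a>'
parser = LinkExtractor("https://example.org")
parser.feed(page)
print(parser.links)  # newly discovered URLs to enqueue for crawling
```

Each discovered URL would then be added to the crawl queue, which is how a search engine's index grows outward from a handful of seed pages.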