Computers and Technology
Consider the following code:

    def tryIt(b):
        for i in range(len(b)):
            b[i] = b[i] + 100

    # *********** MAIN ************
    x = []
    x = [56, 78, 88]
    print(x)
    tryIt(x)
    print(x)

What is the output? (Each answer choice shows the two printed lines.)

    [156, 178, 188] [156, 178, 188]
    [56, 78, 88] [156, 178, 188]
    [56, 78, 88] [56, 78, 88]
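Answer note (a short sketch based on standard Python list semantics, not stated in the original question): a list is mutable, so tryIt receives a reference to the same list object as x and modifies it in place. The first print therefore shows the original values and the second shows the updated ones:

    def tryIt(b):
        for i in range(len(b)):
            b[i] = b[i] + 100   # mutates the caller's list in place

    x = [56, 78, 88]
    print(x)    # [56, 78, 88]   -- printed before the call, list unchanged
    tryIt(x)
    print(x)    # [156, 178, 188] -- the same list, now updated

So the correct choice is [56, 78, 88] followed by [156, 178, 188].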
You have been tasked with building a URL file validator for a web crawler. A web crawler is an application that fetches a web page, extracts the URLs present in that page, and then recursively fetches new pages using the extracted URLs. The end goal of a web crawler is to collect text data, images, or other resources present on a page, and it must validate the resource URLs or hyperlinks it finds. A URL validator is useful for checking whether an extracted URL points to a valid resource to fetch. In this scenario, you will build a URL validator that checks for supported protocols and file types.

What you need to do:
1. Write detailed comments and docstrings.
2. Organize and structure the code for readability.
3. Assume a URL of the form <protocol>://<hostname>/<fileinfo>.

Steps for Completion

Task 1: Create two lists of strings: one for protocols, called valid_protocols, and one for file extensions, called valid_fileinfo. The protocol list should be restricted to http, https, and ftp. The file extension list should contain .html, .csv, and .docx. Split an input named url, then use the first element to check whether the protocol of the URL is in valid_protocols. Similarly, check whether the URL contains a valid file extension from valid_fileinfo.

Task 2: Write the conditions to return a Boolean value of True if the URL is valid, and False if either the protocol or the file extension is not valid.

main.py

    def validate_url(url):
        """Validates the given url passed as a string.

        Arguments:
        url -- String, a valid url should be of the form <protocol>://<hostname>/<fileinfo>

        Protocol = [http, https, ftp]
        Hostname = string
        Fileinfo = [.html, .csv, .docx]
        """
        # your code starts here.

        return  # return True if url is valid else False


    if __name__ == '__main__':
        url = input("Enter an Url: ")
        print(validate_url(url))
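A minimal sketch of one way to complete the skeleton. The list names valid_protocols and valid_fileinfo come from the task statement; splitting the URL on "://" to isolate the protocol, and checking the extension with endswith, are assumptions of this sketch rather than requirements spelled out in the task.

    def validate_url(url):
        """Validate a url of the form <protocol>://<hostname>/<fileinfo>."""
        # Supported protocols and file extensions, as given in the task statement.
        valid_protocols = ['http', 'https', 'ftp']
        valid_fileinfo = ['.html', '.csv', '.docx']

        # Assumption: split on '://' so the first element is the protocol.
        parts = url.split('://')
        if len(parts) != 2:
            # No protocol separator (or more than one), so the URL cannot be valid.
            return False
        protocol, rest = parts

        # The URL is valid only if both the protocol and the file extension are supported.
        has_valid_protocol = protocol in valid_protocols
        has_valid_fileinfo = any(rest.endswith(ext) for ext in valid_fileinfo)
        return has_valid_protocol and has_valid_fileinfo


    if __name__ == '__main__':
        url = input("Enter an Url: ")
        print(validate_url(url))

With this sketch, validate_url("http://example.com/report.csv") returns True, while validate_url("htp://example.com/report.csv") and validate_url("http://example.com/image.png") both return False.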