Retrieving files from URLs
This script was written by me quite a while back. I have documented it a little so that it is easy to understand what it is trying to do.
## python 3.x compliant
## author: Ayush Goel

import urllib.request as ur

file_url = input('Enter the file URL you want to be downloaded: ')
file_name = input('Enter the path where you want the file to be saved (/enter): ')

if file_name == '':
    ## if no location is provided, fall back to a default one
    ## change the download location as suits your machine
    ## this one worked on my Win7 machine
    file_name = 'C:\\Users\\Ayush\\Downloads\\' + file_url.split('/')[-1]

try:
    ## try to retrieve the file using the URL
    ur.urlretrieve(file_url, filename=file_name)
except ValueError:
    ## URLs like "edoc.ub.uni-muenchen.de/7505/1/Fischer_Johannes.pdf"
    ## are missing the scheme (http:// or https://), which in Python 3
    ## raises ValueError before any request is made
    print("The URL could not be parsed. Please provide the complete URL, including the protocol (http, https, ...).")
except (ur.URLError, IOError):
    ## the URL does not lead directly to a file;
    ## it might be a redirect, in which case we need the actual file URL
    print("We are facing issues with the URL you provided.")
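The hard-coded Windows download folder in the script can also be built portably with `os.path` and `urllib.parse`, so the fallback works on any machine. Here is a minimal sketch; the function name `default_save_path`, the `downloaded_file` fallback name, and the use of the user's `Downloads` folder are my own assumptions, not part of the original script:

```python
import os
from urllib.parse import urlparse

def default_save_path(file_url, download_dir=None):
    """Build a save path from the URL's last path segment.

    download_dir defaults to the current user's Downloads folder
    (an assumption; adjust for your machine).
    """
    if download_dir is None:
        download_dir = os.path.join(os.path.expanduser('~'), 'Downloads')
    ## urlparse keeps only the path component, so query strings
    ## like 'file.pdf?x=1' do not leak into the file name
    name = os.path.basename(urlparse(file_url).path)
    if not name:
        ## the URL ended in '/', so there is no file name to reuse
        name = 'downloaded_file'
    return os.path.join(download_dir, name)
```

For example, `default_save_path('https://example.com/a/b/report.pdf')` would save to `report.pdf` inside the download directory.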
I have included some error handling. If you find any issues, comment here or PM me.
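One error case worth knowing about: in Python 3, passing `urlretrieve` a URL without a scheme (like the bare `edoc.ub.uni-muenchen.de/...` example in the comments) raises `ValueError` during URL parsing, before any network request is made, rather than `URLError`. A quick demonstration that needs no network access:

```python
import urllib.request as ur

try:
    ## the scheme (http:// or https://) is missing, so parsing fails
    ur.urlretrieve('edoc.ub.uni-muenchen.de/7505/1/Fischer_Johannes.pdf')
except ValueError as err:
    print('Rejected before any download started:', err)
```

This is why it is safest to catch both `ValueError` and `urllib.error.URLError` around the download call.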