In the continuing vein of updating/refreshing my older Python posts, I have outlined the changes necessary to test for open TCP ports using Python 3.
My original post showed you how to open a socket connection to a host:port to see if it was active and accepting connections. Luckily, this time around I didn’t have to change much of anything. Turns out the only missing links were my print statements. As I mentioned in my last post, Python3 has turned the print statement into a function.
I also added some slightly better error handling to the example. If a connection fails, you can now see the cause of the failure.
Things to remember:
You can use an ip or hostname for the host variable value.
You can test UDP sockets by changing socket.SOCK_STREAM to socket.SOCK_DGRAM.
# Simply change the host and port values
import socket

host = '127.0.0.1'
port = 80

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    print("Success connecting to ")
    print(host, " on port: ", str(port))
    s.close()
except socket.error as e:
    print("Cannot connect to ")
    print(host, " on port: ", str(port))
    print("The error raised was: ", e)
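For reuse, the same check can be wrapped in a small function. This is my own sketch of that idea (the name `is_port_open` and the default timeout are assumptions, not part of the original example):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)  # avoid hanging on filtered ports
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

print(is_port_open('127.0.0.1', 80))
```

The timeout matters in practice: a firewalled port can otherwise leave connect() hanging for a long time before the OS gives up.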
As always, I appreciate any feedback or modifications that would make this example more useful or easy to understand.
This post is inspired by my previous post on utilizing urllib2 to download a sequence of files programmatically. As you probably know, the transition from Python2 to Python3 has left many people struggling to port their code, so I thought I would re-hash some of my old posts and provide Python3 versions of my code examples. One resource I found recently that really helped me is the online version of Mark Pilgrim’s “Dive into Python3”, specifically the chapter on porting your 2.x code to Python3.
The example provided below outlines how to use the urllib library included within Python3 to download a sequence of image files along with comments to describe what is going on.
# import urllib request
import urllib.request
# import urllib error handling
from urllib.error import HTTPError, URLError

# function that downloads a file
def stealStuff(file_name, file_mode, base_url):
    # create the url
    url = base_url + file_name
    try:
        # Open the url
        f = urllib.request.urlopen(url)
        print("downloading ", url)
        # Open our local file for writing
        local_file = open(file_name, "w" + file_mode)
        # Write to our local file
        local_file.write(f.read())
        local_file.close()
    # handle errors
    except HTTPError as e:
        print("HTTP Error:", e.code, url)
    except URLError as e:
        print("URL Error:", e.reason, url)

# Set the range of images to 1-50. It says 51 because the
# range function never gets to the endpoint.
image_range = list(range(1, 51))

# Iterate over image range
for index in image_range:
    base_url = 'http://www.techniqal.com/'
    # create file name based on known pattern
    file_name = str(index) + ".jpg"
    # Now download the image. If these were text files,
    # or other ascii types, just pass an empty string
    # for the second param ala stealStuff(file_name, '', base_url)
    stealStuff(file_name, "b", base_url)
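As a side note, for simple cases like this the standard library also offers urllib.request.urlretrieve, which handles the open/read/write cycle in one call. Here is a sketch of that approach (the helper name `fetch` is my own, not from the original example):

```python
import urllib.request
from urllib.error import HTTPError, URLError

def fetch(url, file_name):
    """Download url to file_name, reporting any errors."""
    try:
        urllib.request.urlretrieve(url, file_name)
        print("downloaded", file_name)
        return True
    except (HTTPError, URLError) as e:
        print("Error:", e, url)
        return False

# e.g. fetch('http://www.techniqal.com/1.jpg', '1.jpg')
```

Note that urlretrieve is considered a legacy interface in Python 3, so the urlopen version above is the more future-proof pattern.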
The key things to learn about converting my old example to the new are outlined below. This was a learning exercise for me, and will hopefully provide enough context for you to understand how to port your own code to Python3.
There are obvious changes in how to use urllib vs. the old urllib2 methods. Take a peek at “Dive into Python3” for more details. He does a much better job describing it than I ever could.
Print statements are now called as a function.
print "My Variable is equal to " + myVariable
print("My Variable is equal to ", myVariable)
Except blocks are handled differently when using a try/except.
except HTTPError as e:
print("HTTP Error:",e.code , url)
The range() function used to return a list, but now returns a lazy range object. If you still want to get a list from the range function, see below.
myRangeList = range(1,100)
myRangeList = list(range(1,100))
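A quick illustration of the difference (my own example, not from the original post). The range object supports iteration, indexing, and membership tests without ever building the full list in memory:

```python
r = range(1, 100)
print(type(r))      # <class 'range'>
print(list(r)[:5])  # [1, 2, 3, 4, 5]
print(99 in r)      # True, checked without materializing a list
```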
I’m not a software engineer by trade, so please excuse any syntax oddities. I appreciate any feedback, or more graceful ways to write this code. Leave them in the comments and I’ll happily update my example.
In light of the recent confusion about the future of del.icio.us and other Yahoo services such as MyBlogLog and Flickr, I immediately exported all of my bookmarks from delicious that I’ve accumulated since August of 2004.
Want to do the same? Go to their site, sign in with your username and password, and save the XML file locally, or use their export tool to export to html for a browser importable format.
The next big question that came to me was “What now?”. I’m not a big fan of the other “cloud” based services like Xmarks, so that wasn’t really an option for me. Although I love that Firefox and Chrome can natively sync bookmarks between browser instances, the organizational structure isn’t great. Anyone with development chops immediately considers rolling their own, and then realizes that they used del.icio.us because they didn’t have the time to start from scratch. For me, this meant looking for web-based packages that allowed you to self-host the service.
I came across two different open source packages that were both easy to implement and supported searching and tagging bookmarks. The first is a Perl/MySQL package called Insipid by Luke Reeves. The second is a PHP/MySQL package called Scuttle by Marcus Campbell. I installed and configured each, and I’m going to use them in tandem for a while to see which I prefer. Here is what I have found so far.
Insipid is packaged as a Perl CGI and utilizes MySQL for storage. It isn’t actively developed, but works great and, once installed, is really easy to use.
Firefox 3 compatible plugin + bookmarklet support
Import from del.icio.us
Snapshot function caches and stores bookmarked content
Only supports single user (could be a pro depending on use case)
Required enabling ExecCGI in apache
Installing the Perl library requirements wouldn’t be easy for most.
Scuttle: Demo URL
Info: Scuttle is a good-looking package written in PHP that uses MySQL for storage. It was updated as recently as March of 2010, and is generally more aesthetically pleasing than Insipid.
PHP makes it easier to implement
Multi-user support and bookmark sharing with other users
In-page media support for audio, documents, images, and video
Firefox plugin doesn’t work in FF3
No supporting documentation describing features and functionality
I’m going to continue to play around with these and see which best integrates into my daily browsing routine. I’ll also continue to wait and watch until Yahoo either decides to sell, close, or open source delicious. I’m open to hearing any other suggestions for good alternatives. Do you use any other web based bookmarking tools that you love and just can’t live without?
If you are an avid reader of my blog, you are well aware of the fact that it has been ~6 months since I have posted anything here. To the 3 of you, I apologize. (Hi Mom and Dad, and anonymous reader from Turkey.)
Seriously though. I am stuck. I know I need to stay on top of this for both professional and personal reasons (i.e., I enjoy it), but I just can’t find that one thing to push me over the edge. Maybe it’s time to rethink the scope of my blog? Maybe I should make it more personal? Or go the other way, and talk more about work and technology. The stock answer is to always write about what you are passionate about. But I’m not sure that fits for me. I just need something to break me out of my slump.
What do you do to break the proverbial writers block? I appreciate any feedback.
Or, I guess I can continue to write “emo” posts whining about the fact that I don’t have anything to write about.
Hmmmmm… maybe that’s not such a bad idea.
Every AJAX app has them. Whether they are necessary in every case (or ever) is debatable. Preloaders.net has them in 3D and 2D, and allows you to show the world that your app is making progress. Brevity aside, they have a decent selection of configurable loading animations that are easy to export to .gif. You can configure background and foreground color, as well as animation speed and size. Definitely a site worth bookmarking for the 1 or 2 times you would ever need it.
I previously posted about another similar site HERE.
One of the great things about Firebug is the pluggable format that allows other developers to add features and functionality to it. I use two Firebug addons that provide their own unique value, and have nice integration points within Firebug.
The first is YSlow, by Yahoo. It analyzes your pages and provides performance statistics, as well as recommendations on how you can make your page/site load faster. I have been using YSlow for about a year, and it is a great way to benchmark pages and page elements.
The newest addon that I am just now trying out is CodeBurner (formerly FireScope) by SitePoint. It provides context menus and a search interface that give you reference information on HTML elements, attributes, and CSS properties. So you can tell within one panel whether the changes you are planning to make to a CSS element will work in IE7, Firefox, or Opera. I am still getting used to integrating it into my workflow, but thus far it seems like a great tool.
I suggest you not only try Firebug, but also extend the functionality, and hence the value, of an already great plugin by installing these other addons.