Category:I703 Python

From ICO wiki

Revision as of 16:50, 22 February 2016

General

Lecturer: Lauri Võsandi

E-mail: lauri [donut] vosandi [plus] i703 [ät] gmail [dotchka] com

The Python Course is 4 ECTS

  • This is not a course for slacking off
  • Deduplicate work - use the same stuff for Research Project I (Projekt I) course or combine it with Web Application Programming (Võrgurakendused I).
  • I expect you to understand by now:
    • Programming/OOP concepts: loops, functions, classes, etc.
    • Networking fundamentals: UDP/TCP ports, logical/hardware address, hostname, domain
    • Get along on the command line: cp, mv, mkdir, cd, ssh user@host, scp file user@host:
  • Possible scenarios to pass the course:
    • Scratch your own itch, the most preferred option, should keep you motivated and happy
    • Create a local UI or agent for your PHP project's API
    • Find an open-source (mainly Python-based) project you want to help with and prepare to participate in Google Summer of Code
    • Prepare a scenario with some scripts for the Cyberolympics competition
    • Pick something below and hope Lauri gets you a keg of beer
  • Progress should be visible in Git at least throughout the second half of the semester
  • (Learn how to) use Google, I am not your tech support
  • Of course I am there if you're stuck with some corner case or have issues understanding some concepts :)
  • When asking for help please try to form properly phrased questions
  • Help each other, socialize, have a beer event and ask me to join as well ;)
  • If you're new to programming make sure you first follow the Python track at Codecademy, then continue with Learn Python the Hard Way. There are also videos about Python in general, pygame for game development and PyGTK for creating GUI-s.
  • If you need more practice attend CodeClub at Mektory on Wednesdays 18:00, they usually have a different exercise every week for beginners
  • If it looks like there is not much Python programming in this course, that is a fair conclusion - that's how Python is mainly used in real life: to glue different components together so they bring additional value. Don't be afraid to learn other technologies ;)

Lectures/workshops


We'll have something for the first half of the semester so you will be able to write a Python script that can parse inputs of different kinds, process them and output something with added value (blog, reports, etc):

  • Hello world with Python, setting up Git repo
  • Working with text files, CSV, messing around with Unicode
  • Working with JSON, XML, Markdown files
  • Using matplotlib and charting data in general
  • Using numpy and scipy
  • Interacting with databases
  • Building networked applications
  • Threads and event loops, running apps under uwsgi, using server-side events
  • Regular expressions
  • Working with Falcon API framework
  • Working with Django web framework, ORM and templating engines
  • Network application security

These are the topics to learn if you're afraid to pick anything else below.

Lectures/labs

Lecture/lab #1

In this lecture/lab we are going to see how to parse Apache web server log files. These log files contain information about each HTTP request made against the web server. Get the example input file here and check out what the file format looks like. If you are working remotely on enos.itcollege.ee you can simply refer to /var/log/apache2/access.log
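
A typical line in Apache's combined log format looks roughly like this (the values here are invented for illustration):

```
1.2.3.4 - - [22/Feb/2016:16:50:00 +0200] "GET /~user/index.html HTTP/1.1" 200 512 "http://example.com/" "Mozilla/5.0 (Windows NT 6.1) Firefox/44.0"
```

The fields are the client address, identity and user, the timestamp, the request line, the status code, the response size in bytes, the referrer and the user agent.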

Lecture recording #1, lecture recording #2

Easily readable version:

fh = open("access.log")
keywords = "Windows", "Linux", "OS X", "Ubuntu", "Googlebot", "bingbot", "Android", "YandexBot", "facebookexternalhit"
d = {} # Curly braces define empty dictionary
total = 0

for line in fh:
    total = total + 1
    try:
        source_timestamp, request, response, referrer, _, agent, _ = line.split("\"")
        method, path, protocol = request.split(" ")
        for keyword in keywords:
            if keyword in agent:
                try:
                    d[keyword] = d[keyword] + 1
                except KeyError:
                    d[keyword] = 1
                break # Stop searching for other keywords
    except ValueError:
        pass # This will do nothing, needed due to syntax

print "Total lines:", total

results = d.items()
results.sort(key = lambda item:item[1], reverse=True)
for keyword, hits in results:
    print keyword, "==>", hits, "(", hits * 100 / total, "%)"

Refined version:

fh = open("access.log")
keywords = "Windows", "Linux", "OS X", "Ubuntu", "Googlebot", "bingbot", "Android", "YandexBot", "facebookexternalhit"
d = {}

for line in fh:
    try:
        source_timestamp, request, response, referrer, _, agent, _ = line.split("\"")
        method, path, protocol = request.split(" ")
        for keyword in keywords:
            if keyword in agent:
                d[keyword] = d.get(keyword, 0) + 1
                break
    except ValueError:
        pass

total = sum(d.values())
print "Total lines with requested keywords:", total
for keyword, hits in sorted(d.items(), key = lambda (keyword,hits):-hits):
    print "%s => %d (%.02f%%)" % (keyword, hits, hits * 100.0 / total)

Exercises:

  • Try to reduce the amount of lines
  • Add extra functionality:
    • What were the top 5 requested URL-s?
    • Whose URL-s are the most popular? Hint: /~username/ in the beginning of the URL is college user account.
    • How much is this user causing traffic? Hint: the response size in bytes is in the variable 'response'.
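
As the exercises hint, the keyword counting above can be compressed with the Counter object; a sketch under the same assumptions about the log format (the function name is my own):

```python
from collections import Counter

KEYWORDS = ("Windows", "Linux", "OS X", "Ubuntu", "Googlebot",
            "bingbot", "Android", "YandexBot", "facebookexternalhit")

def count_agents(lines):
    # Tally the first matching keyword per line, like the loop above
    d = Counter()
    for line in lines:
        try:
            _, request, _, _, _, agent, _ = line.split("\"")
        except ValueError:
            continue # Skip garbage lines
        for keyword in KEYWORDS:
            if keyword in agent:
                d[keyword] += 1
                break # Stop searching for other keywords
    return d
```

Counter's most_common() then replaces the manual items()/sort dance, e.g. count_agents(open("access.log")).most_common(5).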

Lecture/lab #2

So far we've dealt with only one file; usually you're digging through many files and want to automate your work as much as possible. At enos.itcollege.ee you can find all the Apache log files under the directory /var/log/apache2. Download the files to your local machine:

rsync -av username@enos.itcollege.ee:/var/log/apache2/ ~/logs/

Alternatively you can just invoke Python on enos:

ssh username@enos.itcollege.ee
python path/to/script.py

The following simply iterates over the files in the directory and skips the unwanted ones:

import os

# Following is the directory with the log files;
# on Windows substitute the directory where you downloaded them
root = "/var/log/apache2"

for filename in os.listdir(root):
    if not filename.startswith("access.log"):
        print "Skipping unknown file:", filename
        continue
    if filename.endswith(".gz"):
        print "Skipping compressed file:", filename
        continue
    print "Going to process:", filename
    for line in open(os.path.join(root, filename)):
        pass # Insert magic here

You can use the gzip module to read compressed files denoted with .gz extension:

import gzip
# gzip.open will give you a file object which transparently uncompresses the file as it's read
for line in gzip.open("/var/log/apache2/access.log.1.gz"):
    print line
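
The two cases above (plain and .gz) can be folded into one small helper so the parsing loop doesn't have to care about compression; the function name here is my own:

```python
import gzip

def open_log(path):
    # Transparently uncompress .gz files, open the rest as-is
    if path.endswith(".gz"):
        return gzip.open(path)
    return open(path)
```

Usage: for line in open_log(os.path.join(root, filename)): ...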

Set up Git, you'll have to do this on every machine you use:

git config --global user.name "$(getent passwd $USER | cut -d ":" -f 5)"
git config --global user.email $USER@itcollege.ee
git config --global core.editor gedit

Create a repository at GitHub and in your source code tree:

git init
git remote add origin git@github.com:user-name/log-parser.git
git add *.py
git commit -m "Initial commit"
git push -u origin master

Also create .gitignore file ;)
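
For a Python project the .gitignore would typically list at least byte-compiled files and, in this case, any downloaded log files; a minimal example:

```
*.pyc
__pycache__/
*.log
*.gz
```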


import os
import gzip
import urllib

root = "/var/log/apache2"

keywords = "Windows", "Linux", "OS X", "Ubuntu", "Googlebot", "bingbot", "Android", "YandexBot", "facebookexternalhit"
d = {} # Curly braces define empty dictionary
urls = {}
user_bytes = {}

total = 0
for filename in os.listdir(root):
    if not filename.startswith("access.log"):
        continue
    if filename.endswith(".gz"):
        fh = gzip.open(os.path.join(root, filename))
    else:
        fh = open(os.path.join(root, filename))
    print "Parsing:", filename
    for line in fh:
        total = total + 1
        try:
            source_timestamp, request, response, referrer, _, agent, _ = line.split("\"")
            method, path, protocol = request.split(" ")
        except ValueError:
            continue # Skip garbage
            
        if path == "*": continue # Skip asterisk for path

        _, status_code, content_length, _ = response.split(" ")
        try:
            content_length = int(content_length)
        except ValueError:
            content_length = 0 # Apache logs "-" when no response body was sent
        path = urllib.unquote(path)
        
        if path.startswith("/~"):
            username = path[2:].split("/")[0]
            try:
                user_bytes[username] = user_bytes[username] + content_length
            except KeyError:
                user_bytes[username] = content_length

        try:
            urls[path] = urls[path] + 1
        except KeyError:
            urls[path] = 1
        
        for keyword in keywords:
            if keyword in agent:
                try:
                    d[keyword] = d[keyword] + 1
                except KeyError:
                    d[keyword] = 1
                break

print
print("Top 5 bandwidth hoggers:")
results = user_bytes.items()
results.sort(key = lambda item:item[1], reverse=True)
for user, transferred_bytes in results[:5]:
    print user, "==>", transferred_bytes / (1024 * 1024), "MB"
    
print
print("Top 5 visited URL-s:")
results = urls.items()
results.sort(key = lambda item:item[1], reverse=True)
for path, hits in results[:5]:
    print "http://enos.itcollege.ee" + path, "==>", hits, "(", hits * 100 / total, "%)"
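
The quote-splitting above can also be replaced with a regular expression, as the exercises suggest; a sketch of a pattern for the combined log format (the group names are my own):

```python
import re

# One line of the Apache combined log format
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

def parse_line(line):
    m = LOG_RE.match(line)
    return m.groupdict() if m else None # None for garbage lines
```

Each field is then available by name, e.g. parse_line(line)["path"], and garbage lines simply return None instead of raising ValueError.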


Exercises:

  • Combine what you've learned so far to parse all access.log files under /var/log/apache2
  • Place the source code in a GitHub repository, call it for example log-parser
  • Add extra functionality:
    • Improve the log file parsing with CSV reader or regular expressions.
    • Improve the counting with Counter object.
    • What were the operating systems used to visit the URL-s?
    • What were the top 5 Firefox versions used to visit the URL-s?
    • What were the top 5 referrers? Their hostnames?

Project ideas

Chat/video conferencing

WebRTC is an exciting technology built into modern web browsers; it enables peer-to-peer data transfers between browsers. WebRTC can be used to implement text-based chat, file transfers and video calls. One possible idea is to implement something usable for a small company and provide integration with an Active Directory or Samba based domain controller.

  • easy: Basic user/session management
  • easy: Mobile friendly UI
  • medium: Phonebook integration via LDAP
  • medium: Single sign-on via Kerberos

Example snippet for fetching full user names over LDAP:

import ldap, ldap.sasl

l = ldap.initialize('ldap://intra.itcollege.ee', trace_level=2)
l.set_option(ldap.OPT_REFERRALS, 0) # Don't chase Active Directory referrals
l.sasl_interactive_bind_s('', ldap.sasl.gssapi()) # Kerberos single sign-on
r = l.search_s('dc=intra,dc=itcollege,dc=ee', ldap.SCOPE_SUBTREE,
    '(&(objectClass=user)(objectCategory=person))', ['cn', 'mail'])
for dn, entry in r:
    if not dn:
        continue # Skip referral entries
    full_name, = entry["cn"]
    print full_name

Enhanced web server index view

It is relatively easy to configure nginx/Apache to show a fancier directory index, which could for example enable multimedia playback for a directory served via the web. There is already some code which can be used as a basis.


Pythonize robots

The current football robot software stack is written in C++ using the Qt framework. With proper layering we could move it to Python while keeping performance-sensitive parts in C/C++ libraries such as OpenCV. This way we could more easily get newbies involved in the actual game strategy programming.

At first glance the new engine could look as follows (see the preliminary example PyRobovision):

  • hardcore: engine based on event loop (epoll)
  • done: use OpenCV Python bindings for image recognition. Guide for Windows is here
  • hardcore: support loading Python scripts from files to be used for game logic
  • done: support streaming MJPEG to the web browser for debugging
  • done: support overlay of interesting scene objects in the browser
  • hardcore: support websockets to interact with a web browser
  • überhardcore: explore PyCUDA if that sounds like a viable approach
  • überhardcore: explore machine learning for certain aspects

Some of these things are of course far-fetched. We can simply start with an event loop that forwards frames to a web browser and then improve it step by step. In reality it would be good enough to have something reusable for the next Robotex by the end of the semester.

Butterknife

Butterknife is a tool for deploying Linux-based desktop OS on bare metal. It's pretty much usable, but could use some refactoring and extra features.

  • easy: Add Travis CI tests
  • easy: Add unittests
  • easy: Add automatable nightly builds for templates
  • easy: Add init subcommand for setting up Butterknife server
  • easy: Set up Butterknife server for robot firmware(s)
  • medium: Fix push/pull
  • hardcore: Online incremental upgrades and tray icon
  • hardcore: Dockerize Butterknife server

Hardcore tasks are for those who *really* want to understand how a Linux-based OS is put together. Every decent hacker has a distribution named after them, right? ;)


Certidude

Certidude is a tool for managing (VPN) certificates and setting up services (StrongSwan, OpenVPN, Puppet?) to use those certificates. There's a lot of room for experimentation and for learning how different software/hardware components and technologies work together.

  • easy: Fix nchan support
  • easy: Fix Travis CI
  • easy: Add command-line features
  • easy: Add OpenVPN support, goes hand-in-hand with Windows packaging
  • easy: Add Puppet support, goes hand-in-hand with autosign for domain computers below
  • easy: Add minimal user interface with GTK or Qt bindings
  • medium: Certificate signing request retrieval from IMAP mailbox
  • medium: Certificate issue via SMTP, goes hand-in-hand with previous task
  • medium: Certificate renewal
  • medium: Add unittests
  • medium: LDAP querying for admin group membership
  • medium: Autosign for domain computers (=Kerberos authentication)
  • medium: Refactor tagging (?)
  • hardcore: Add (service+UI) packaging for Windows as MSI
  • hardcore: Add SCEP support
  • hardcore: Dockerize Certidude server

The topics discussed in this project have significant overlap with authentication/authorization and firewalls/VPN-s electives next year, so doing this kind of stuff already now makes it easier to comprehend next year ;)


Active Directory web interface

Some code was written for managing users in an OpenLDAP database in 2014. It should take reasonable effort to patch the code to work with MS Active Directory and Samba4. The Samba Python scripts can be used to talk to the domain controller. Some code for adding users by Estonian ID-code is already there. This should be doable by a capable student or two, and it should be easily combinable with Web Application Programming (Võrgurakendused) ;)

  • easy: Add Travis CI
  • medium: Port to AD/Samba4
  • medium: Add group management
  • medium: Add Kerberos support for authenticating users
  • medium: Check membership of domain admins group via LDAP
  • medium: One-time registration link generation, for sending account creation link to a friend
  • hardcore: Check delegation instead of group membership
  • hardcore: Dockerize Samba4 + web interface

The topics discussed in this project have significant overlap with authentication/authorization elective next year, so doing this kind of stuff already now makes it easier to comprehend next year ;)
