12.21.2016

Get VirtualBox VM to use host's DNS

From here and here

1. Find out the VM name:
c:\Program Files\Oracle\VirtualBox> VBoxManage list runningvms

2. Turn on the NAT DNS host resolver:
c:\Program Files\Oracle\VirtualBox> VBoxManage modifyvm "VM name" --natdnshostresolver1 on
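
Note that most modifyvm settings only take effect while the VM is powered off, so shut the VM down first if it is running. To double-check the setting afterwards (the exact key name may vary by VirtualBox version):
c:\Program Files\Oracle\VirtualBox> VBoxManage showvminfo "VM name" --machinereadable | findstr natdnshostresolver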

11.03.2016

Install Python 2.7 and Python 3.5 alongside Python 2.6 on CentOS 6.5

I got most of this from this post.

1. Prep
# yum groupinstall "Development tools"
# yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel

2. Compile as a shared library - add /usr/local/lib to /etc/ld.so.conf
# vi /etc/ld.so.conf
include ld.so.conf.d/*.conf
/usr/local/lib
# ldconfig

3. Install Python in /usr/local
# Python 2.7.12:
wget http://python.org/ftp/python/2.7.12/Python-2.7.12.tar.xz
tar xf Python-2.7.12.tar.xz
cd Python-2.7.12
./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared 
make && make altinstall

# Python 3.5.2:
wget http://python.org/ftp/python/3.5.2/Python-3.5.2.tar.xz
tar xf Python-3.5.2.tar.xz
cd Python-3.5.2
./configure --prefix=/usr/local --enable-shared
make && make altinstall

4. Run ldconfig again
# ldconfig
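
Quick sanity check - altinstall leaves the stock Python 2.6 untouched:
$ /usr/local/bin/python2.7 -V
$ /usr/local/bin/python3.5 -V
$ python -V    # should still report the system Python 2.6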

5. Install python virtualenv
# yum install python-virtualenv

6. Create virtualenv for python2.7
$ mkdir project_home
$ cd project_home
$ virtualenv -p /usr/local/bin/python2.7 .venv2.7

7. Create virtualenv for python3.5
$ mkdir project_home
$ cd project_home
$ pyvenv-3.5 .venv3.5
(this seems to work even though I get an error message:
Unable to symlink '/usr/local/bin/python3.5' to '/auto/home/delacs/python_projects_on_perf_utils/test_project/.venv3.5/bin/python3.5')
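
To use either environment, activate it first, e.g.:
$ source .venv2.7/bin/activate
$ python -V     # now reports 2.7.12
$ deactivate
$ source .venv3.5/bin/activate
$ python -V     # now reports 3.5.2
$ deactivate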

10.26.2016

DRF JWT Authentication

i.e., Django REST Framework with JSON Web Token authentication. I got the solution from here:
http://getblimp.github.io/django-rest-framework-jwt/
http://zqpythonic.qiniucdn.com/data/20141006233346/index.html

$ pip install djangorestframework-jwt

In settings.py:
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
    ),
}

In urls.py:
from rest_framework_jwt.views import obtain_jwt_token, refresh_jwt_token

urlpatterns = [
    # ...
    url(r'^api-token-auth/', obtain_jwt_token),
    url(r'^api-token-refresh/', refresh_jwt_token),
]

To test:
$ curl -X POST -d "username=admin&password=password123" http://localhost:8000/api-token-auth/
$ curl -X POST -H "Content-Type: application/json" -d '{"username":"admin","password":"password123"}' http://localhost:8000/api-token-auth/
$ curl -H "Authorization: JWT <token>" http://localhost:8000/protected-url/
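
The same flow in Python, assuming the requests library is installed and the same local dev server and credentials as in the curl examples above (djangorestframework-jwt returns the token under the "token" key):
import requests

BASE = "http://localhost:8000"

# Obtain a token, same as the curl POST above
resp = requests.post(BASE + "/api-token-auth/",
                     json={"username": "admin", "password": "password123"})
resp.raise_for_status()
token = resp.json()["token"]

# Use the token against a protected endpoint
resp = requests.get(BASE + "/protected-url/",
                    headers={"Authorization": "JWT " + token})
print(resp.status_code)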

10.25.2016

Django LDAP Integration

These posts helped me:
http://kacperdziubek.pl/python/django-ldap-open-directory-integration/
https://pythonhosted.org/django-auth-ldap/

Install:
$ pip install django-auth-ldap

Then in settings.py, add:
import ldap
from django_auth_ldap.config import LDAPSearch

AUTHENTICATION_BACKENDS = (
    'django.contrib.auth.backends.ModelBackend',
    'django_auth_ldap.backend.LDAPBackend',
)

AUTH_LDAP_SERVER_URI = "ldap://my.appauth.com"  # IP or hostname of the directory (LDAP) server
AUTH_LDAP_BIND_DN = "CN=Accounts,OU=US Security,DC=corp,DC=com"
AUTH_LDAP_BIND_PASSWORD = "MySecurePassword"
AUTH_LDAP_USER_SEARCH = LDAPSearch("OU=US Users,dc=corp,dc=com",
    ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)")
AUTH_LDAP_CONNECTION_OPTIONS = {
    # disable referral chasing (recommended when talking to Active Directory)
    ldap.OPT_REFERRALS: 0
}

AUTH_LDAP_USER_ATTR_MAP = {
    "username": "sAMAccountName",
    "first_name": "givenName",
    "last_name": "sn",
    "email": "mail"
}
With the mapping above, these LDAP attributes are saved automatically to the Django users table when a user authenticates.
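
A quick way to verify the backend from the Django shell ($ python manage.py shell), with placeholder credentials for a real directory account:
from django.contrib.auth import authenticate

user = authenticate(username="jdoe", password="secret")  # placeholder credentials
if user is not None:
    # first_name, last_name and email come from AUTH_LDAP_USER_ATTR_MAP
    print("%s %s %s %s" % (user.username, user.first_name, user.last_name, user.email))
else:
    print("LDAP authentication failed")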

10.20.2016

Django Notes

1. Back up and seed data using fixtures
# Save data
$ python manage.py dumpdata --format=json myapp > myapp/fixtures/initial_data.json
# Load data
$ python manage.py loaddata myapp/fixtures/initial_data.json

2. Open python shell - interactive console
$ python manage.py shell
Python 2.7.12 (default, Jul  1 2016, 15:12:24) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>

3. Start a project
$ django-admin startproject mysite

4. Start initial migration
$ python manage.py migrate

5. Start server
$ python manage.py runserver

6. Create an app
$ python manage.py startapp newapp

7. Create and run a migration
$ python manage.py makemigrations
$ python manage.py migrate
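
For example, with a hypothetical model in newapp/models.py (model name and fields are made up for illustration; remember to add 'newapp' to INSTALLED_APPS):
from django.db import models

class Host(models.Model):
    name = models.CharField(max_length=100, unique=True)
    ip_address = models.GenericIPAddressField()
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name

makemigrations then generates a migration for newapp, and migrate applies it to the database.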

9.12.2016

Python Notes

1. Booleans
The numbers 0, 0.0, and 0+0j are all False; any other number is True.
The empty string "" is False; any other string is True.
The empty list [] is False; any other list is True.
The empty dictionary {} is False; any other dictionary is True.
The empty set set() is False; any other set is True.
The special Python value None is always False.
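
A quick check of these rules in the REPL:
>>> [bool(x) for x in (0, 0.0, 0+0j, "", [], {}, set(), None)]
[False, False, False, False, False, False, False, False]
>>> [bool(x) for x in (1, -0.5, "hi", [0], {"k": 1}, {0})]
[True, True, True, True, True, True]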

2. Working on REPL
Import library
>>> from autoinfra.resource import Client, Ddr, Resource
List all available methods
>>> dir(Client)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_connect_ssh', 'bring_up_eth', 'configure_ip', 'eth_filter', 'get_eths_details', 'get_eths_via_ifconfig', 'get_eths_via_ip_a', 'get_switch_port_details', 'is_connected', 'is_eth_link_detected', 'send_cmd']
Access docstring on a method
>>> help(Client.get_switch_port_details)

3. Testing
From this post.

$ pip install nose

From the root of the project, run nosetests; it looks for any test under that directory and runs it.
$ pwd
/auto/home13/delacs/Documents/projects/nextgen/libs/autoinfra
$ nosetests
.....
----------------------------------------------------------------------
Ran 5 tests in 2.659s
OK

To run a single file
$ nosetests -v autoinfra/resources/tests/test_ddr.py
test_get_10g_eths (test_ddr.TestDdr) ... ok
test_se_cmd (test_ddr.TestDdr) ... ok
test_send_cmd (test_ddr.TestDdr) ... ok
----------------------------------------------------------------------
Ran 3 tests in 2.602s

To run a particular test in a file
$ nosetests -v autoinfra/switches/tests/test_cisco.py:TestCisco.test_move_to_vlan_no_nxapi_single
test_move_to_vlan_no_nxapi_single (test_cisco.TestCisco) ... ok
----------------------------------------------------------------------
Ran 1 test in 12.423s
OK

This works when test files have imports like this...
from autoinfra.resources.client import Client
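
For reference, a minimal test file that nose would discover (the class and assertion here are made up, not the real autoinfra tests):
# autoinfra/resources/tests/test_client.py (hypothetical)
import unittest

from autoinfra.resources.client import Client


class TestClient(unittest.TestCase):

    def test_client_has_eth_filter(self):
        # made-up assertion, just to show the structure nose picks up
        self.assertTrue(callable(Client.eth_filter))


if __name__ == '__main__':
    unittest.main()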

4. Install Python 2.7.12 in another directory (/opt/python)
# wget https://www.python.org/ftp/python/2.7.12/Python-2.7.12.tgz
# tar zxf Python-2.7.12.tgz
# cd Python-2.7.12/
# ./configure --prefix=/opt/python
# make
# make install

5. Create a virtual environment
$ sudo yum install python-virtualenv
$ mkdir sample_project
$ cd sample_project
$ virtualenv -p /opt/python/bin/python2.7 .venv

6. Data dumper
>>> from pprint import pprint
>>> pprint(vars(Host.objects.get(name="w3-perfsds-0001")))

7.19.2016

Install TP-Link AC1200 T4U in Ubuntu 16.04

$ sudo apt-get install synaptic
Search for rtl8812au-dkms in Synaptic and install it, then:
$ sudo service network-manager stop
$ sudo modprobe -rfv 8812au
$ sudo modprobe -v 8812au
$ sudo service network-manager start
$ sudo dkms status
rtl8812au, 4.3.8.12175.20140902+dfsg, 4.4.0-31-generic, x86_64: installed

Every time the kernel is updated, do these steps to recompile the module:
$ dkms status
$ sudo dkms remove rtl8812au/4.3.8.12175.20140902+dfsg -k $(uname -r)
$ sudo dkms install rtl8812au/4.3.8.12175.20140902+dfsg
$ sudo modprobe 8812au
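
To confirm the driver is actually loaded after the rebuild:
$ lsmod | grep 8812au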

5.24.2016

Cron Like Program in Ruby with Clockwork

I like this gem I discovered called clockwork. It is a good replacement for cron, and then some, because you can add extra logic when scheduling jobs.
require 'clockwork'
module Clockwork
  handler do |job|
    puts "Running #{job}"
  end 

  # handler receives the time when job is prepared to run in the 2nd argument
  # handler do |job, time|
  #   puts "Running #{job}, at #{time}"
  # end

  every(1.hour, 'hourly.ms_status') do
    `cd $HOME/Documents/projects/monweb_management && $HOME/.rvm/wrappers/ruby-2.3.0@RailsDev/rake file:get_ms_status`
  end 

  every(1.day, 'daily.pbba_query', :at => '07:00') do
    `cd $HOME/Documents/projects/monweb_management && $HOME/.rvm/wrappers/ruby-2.3.0@RailsDev/rake file:get_pbba_sizing`
  end 
end
Then run it like this...
$ clockwork job_scheduler.rb
I, [2016-05-24T09:18:21.215569 #771]  INFO -- : Starting clock for 2 events: [ hourly.ms_status daily.pbba_query ]
I, [2016-05-24T09:18:21.215729 #771]  INFO -- : Triggering 'hourly.ms_status'

2.03.2016

RVM Implode

My rvm installation and ruby got messed up after a change in home directory mounts at work. It looks like rvm had referenced everything by the full path of my old home directory. The only way I found to easily fix this was to reinstall rvm, ruby, and all the gems I need, including rails.
1. Remove every trace of rvm
$ rvm implode
Are you SURE you wish for rvm to implode?
This will recursively remove /auto/home13/delacs/.rvm and other rvm traces?
(anything other than 'yes' will cancel) > yes
Removing rvm-shipped binaries (rvm-prompt, rvm, rvm-sudo rvm-shell and rvm-auto-ruby)
Removing rvm wrappers in /auto/home13/delacs/.rvm/bin
Hai! Removing /auto/home13/delacs/.rvm

Note you may need to manually remove /etc/rvmrc and ~/.rvmrc if they exist still.
Please check all .bashrc .bash_profile .profile and .zshrc for RVM source lines and delete or comment out if this was a Per-User installation.
Also make sure to remove `rvm` group if this was a system installation.
Finally it might help to relogin / restart if you want to have fresh environment (like for installing RVM again).

2. Now follow this post to reinstall everything.

1.24.2016

Capstone Cloud Computing References

1. Hadoop cluster setup on AWS - http://insightdataengineering.com/blog/hadoopdevops/
2. Cassandra Cluster setup on AWS - http://ealfonso.com/setting-up-a-cassandra-cluster-on-awsubuntu14-04/
3. Python with Hadoop Streaming
- http://www.glennklockwood.com/data-intensive/hadoop/streaming.html
- http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
4. Python with Cassandra - https://academy.datastax.com/demos/getting-started-apache-cassandra-and-python-part-i
5. Cassandra Tutorial - http://wiki.apache.org/cassandra/GettingStarted
namenode:~$ cd $HADOOP_HOME
namenode:/usr/local/hadoop$ bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.7.1.jar -file /home/ubuntu/mapper.py -mapper /home/ubuntu/mapper.py -file /home/ubuntu/reducer.py -reducer /home/ubuntu/reducer.py -input /user/mobydick.txt -output /user/gutenberg-output
namenode:/usr/local/hadoop$ bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.7.1.jar -D mapreduce.job.maps=4 -D mapreduce.job.reduces=1 -file /home/ubuntu/mapper.py    -mapper /home/ubuntu/mapper.py -file /home/ubuntu/reducer.py   -reducer /home/ubuntu/reducer.py -input /user/mobydick.txt -output /user/gutenberg-output

$ hdfs dfs -ls /user/gutenberg-output/
Found 5 items
-rw-r--r--   3 ubuntu supergroup          0 2016-01-26 07:53 /user/gutenberg-output/_SUCCESS
-rw-r--r--   3 ubuntu supergroup      91541 2016-01-26 07:53 /user/gutenberg-output/part-00000
-rw-r--r--   3 ubuntu supergroup      91157 2016-01-26 07:53 /user/gutenberg-output/part-00001
-rw-r--r--   3 ubuntu supergroup      91940 2016-01-26 07:53 /user/gutenberg-output/part-00002
-rw-r--r--   3 ubuntu supergroup      92025 2016-01-26 07:53 /user/gutenberg-output/part-00003

Delete output before starting a new job:
$ hdfs dfs -rm /user/gutenberg-output/*
$ hdfs dfs -rmdir /user/gutenberg-output/
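
For reference, a minimal word-count mapper.py/reducer.py pair that works with the streaming command above; my actual scripts may have differed, this is the standard pattern from the Michael Noll tutorial linked in item 3:
#!/usr/bin/env python
# mapper.py - emit "word<TAB>1" for every word on stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print '%s\t1' % word

#!/usr/bin/env python
# reducer.py - sum the counts for each word (streaming sorts input by key)
import sys

current_word = None
current_count = 0

for line in sys.stdin:
    word, count = line.strip().split('\t', 1)
    count = int(count)
    if word == current_word:
        current_count += count
    else:
        if current_word is not None:
            print '%s\t%d' % (current_word, current_count)
        current_word = word
        current_count = count

if current_word is not None:
    print '%s\t%d' % (current_word, current_count)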