
2010

USB console cable for Cisco Catalyst

usb a mini b

Cisco’s new line of equipment now uses a mini-USB cable for console configuration. There is no longer a need for a USB-to-serial adapter or a roll-over cable.

Connecting the USB cable on Linux should give you a new ACM device that looks something like this: "/dev/ttyACM0".

To verify, you can also look through dmesg or /var/log/messages:

[265430.720082] usb 4-1: new full speed USB device using uhci_hcd and address 4
[265430.914246] cdc_acm 4-1:1.0: This device cannot do calls on its own. It is not a modem.
[265430.914305] cdc_acm 4-1:1.0: ttyACM0: USB ACM device
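If you would rather script the check than read dmesg by hand, here is a minimal Python sketch; the `find_acm_devices` helper is my own invention, not part of any tool mentioned here:

```python
import glob

def find_acm_devices():
    """Return any CDC-ACM serial devices the kernel has registered."""
    return sorted(glob.glob("/dev/ttyACM*"))

# Prints something like ['/dev/ttyACM0'] once the cable is plugged in,
# or [] when no ACM device is present.
print(find_acm_devices())
```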

The easiest way to connect to a USB-capable Cisco device is to use screen, though you can still use minicom.

Screen command: screen /dev/ttyACM0 9600

The USB cable from Cisco is pin-for-pin identical to the standard USB A to mini-B cable you can find at your local electronics store.

Ubuntu 10.10 Maverick with 2.6.37 kernel

glxgears

If you want the latest (read: bleeding edge) kernel that the Ubuntu developers are working on, then have a look here:

Ubuntu Dev Kernels

Be sure to download the three files matching your architecture and install them.

I am currently using 2.6.37-rc3-natty, which will be the kernel that Natty Narwhal (11.04) is based on. It just works: everything is detected, and there is a noticeable increase in responsiveness.

Previously I posted about the ~200-line kernel patch to increase desktop responsiveness. You do not need to wait for a new kernel or recompile one yourself to test the benefits. According to this Linux kernel mailing list thread, while the correct way is to implement it in the kernel so it is transparent to the user, you can enable it with a few quick entries in your ~/.bashrc file:

if [ "$PS1" ] ; then
    mkdir -m 0700 /sys/fs/cgroup/cpu/user/$$
    echo $$ > /sys/fs/cgroup/cpu/user/$$/tasks
fi

Then, as the superuser, do this:

mount -t cgroup cgroup /sys/fs/cgroup/cpu -o cpu
mkdir -m 0777 /sys/fs/cgroup/cpu/user

"Done. Same effect. However: not crazy."

Thank you Lennart.

threading.Thread vs. multiprocessing.Process

The Feather or the Anvil?

First, a bit of background: I was tasked with creating a high-level tester for my company's system. The idea is to create one or more monkeys to pound away at the company's product for a very long time. It is a concurrent, parallel programming project with the requirement that it be compatible with Python 2.6.2.

With threading, you get real POSIX threads (pthreads), which work pretty well. They implicitly share state with the parent thread and do not need IPC or messaging. They have low latency and a low overall resource footprint.

However, there are drawbacks that made further development using threads a real problem, chiefly around signals: threads do not handle signals, everything contends with the global interpreter lock (GIL, which allows only one thread to run at a time), and more.

This particular Python implementation is used as a wrapper around binaries on the system, with the benefit of understanding their signals and passing them back to Python. The threading module simply does not like this:

failed to set child signal, error signal only works in main thread

According to the documentation:

Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), or pause(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads).

My hands are tied: I cannot upgrade Python, modify the execute() method being used nor can I trap the signal being sent to the thread by the execute().
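The restriction is easy to demonstrate. The sketch below is my own toy example (not the company's tester): installing a handler from a worker thread raises ValueError with the complaint quoted above.

```python
import signal
import threading

def install_handler(errors):
    # Only the main thread may call signal.signal(); anywhere else it raises.
    try:
        signal.signal(signal.SIGTERM, lambda signum, frame: None)
    except ValueError as err:
        errors.append(str(err))

errors = []
t = threading.Thread(target=install_handler, args=(errors,))
t.start()
t.join()
print(errors[0])  # e.g. "signal only works in main thread ..."
```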

There is one heavy-handed solution, and that is to use multiprocessing. It is almost a one-to-one replacement for the threading module, including the same API. However, it has drawbacks in comparison to threads: a larger resource footprint (each worker is a full process), and processes do not share state, so they must use some form of message passing such as IPC to communicate.

If you can do this:

Thread(target=func, args=(args,)).start()

Then it is trivial to convert it to:

Process(target=func, args=(args,)).start()

There are benefits to the anvil approach, however. Processes automatically run on multiple cores, which makes building distributed systems easier; processes are safer to use, as they do not implicitly share any state; and they make high-throughput processing trivial. There is the additional benefit of not needing locks, which means you get to side-step the GIL.

I managed to replace every instance of threading with multiprocessing, and suddenly I am no longer in GIL hell, nor am I having issues with handling signals in my child processes. The only downsides are that we require more resources to run the same test, and initial start-up is slower due to process creation. No one ever said it was lightweight.

xorg 1.9 versus fglrx

With the release of Ubuntu 10.10 (Maverick Meerkat) just 10 days away, the Ubuntu x-swat people have been busy getting fglrx ready for release. fglrx now compiles against the latest Maverick kernel and finally resolves the unknown symbol issue.

xswat-64

In the laptop is an ATI Mobility Radeon HD 3670, and with the latest fglrx driver usability is awful. There is horrible tearing when using "Appearance -> Visual Effects -> Normal", which had to be set back to "None" just to be usable. Even on "None", scrolling down in documents, Chrome, Firefox and Skype all gives blurred or stuttered graphics.

These are for me "do not use" releases.

There are so far two updates. The first one gets fglrx up to date to latest Meerkat.

fglrx-installer (2:8.780-0ubuntu1) maverick; urgency=low

  * New upstream release.
    - Fix build issues with kernel fix for CVE-2010-3081 (LP: #642518).
    - Add compatibility with 2.6.35 kernels (LP: #573748).
    - Add compatibility with xserver 1.9 (LP: #630599).
  * Make the driver Depend on the appropriate xserver-xorg-video-$ABI (LP: #616215).

The latest update adds 2.6.36 kernel support:

fglrx-installer (2:8.780-0ubuntu2) maverick; urgency=low

  * debian/fglrx.postinst:
    - Call dpkg-trigger with "--by-package".
  * Add add-compatibility-with-2.6.36-kernels.patch:
    - Fix build issues with 2.6.36 kernels.
  * Add use-cflags_module-together-with-modflags.patch:
    - Fix build issues with kernels that don't have MODFLAGS and use CFLAGS_MODULE.

There will hopefully be more releases soon that help to resolve these issues.

Ubuntu 10.10 Maverick Meerkat on Dell Studio XPS 1640

After a brief time with Karmic (9.10), the laptop was upgraded to the beta of Maverick. Everything works out of the box, with no additional tricks necessary to get the Dell Studio XPS 1640 fully functional.

ubuntu-maverick-meerakat

The only drawback I found in upgrading from Karmic was the loss of fglrx as an xorg-server driver. Maverick ships with version 1.9 of xorg-server, which is ABI-incompatible with what fglrx is compiled against.

The exact error is: undefined symbol: savedScreenInfo which causes X not to start.

The rumour is that ATI will release an fglrx package compiled against xorg-server 1.9 when Maverick ships on Sunday, October 10th.

Compiling the fglrx against the latest kernel is also not a problem if you follow the advice found here.

This does not prevent a good desktop experience with your "Visual Effects" set to "Normal". By default, Maverick ships with the latest radeon driver for xorg-server, which is good enough to handle all those lovely desktop effects. I recommend this driver over fglrx because:

  • it is open source
  • it has acceptable 2D/3D performance
  • it is not a resource hog
  • the laptop will no longer burn your lap

I will continue testing against fglrx and I look forward to the final release.

Maverick Meerkat Release Schedule

UPDATE: If you have upgraded to Maverick and switched to the radeon driver after first having fglrx, you will need to purge your fglrx packages. What happens otherwise is that X starts with the radeon driver but loads your fglrx GLX libraries, causing a segmentation fault or a garbled screen.

To fix this:

aptitude purge fglrx fglrx-amdcccle fglrx-modaliases xorg-driver-fglrx fglrx-dev

xdd is not "extreme dd"

While the ladies and gentlemen at IO Performance Inc. are busy with their 7.0 "Phoenix" release, they have left their website in disrepair, including the disappearance of their product, source code and documentation.

phoenix

Since I make heavy use of xdd, and apparently I am not the only one who needed the documentation, source and binary, I have provided them here for everyone.

xdd65.013007.tar xdd6.5 documentation

To explain what xdd is:

Xdd is a tool for measuring and characterizing disk subsystem I/O on single systems and clusters of systems. It is a command-line based tool that grew out of the UNIX world and has been ported to run in Windows environments as well. It is designed to provide consistent and reproducible performance measurements of disk I/O traffic. There are three basic components to xdd: the xdd program itself, a timeserver program, and a gettime program. The timeserver and gettime programs are used to synchronize the clocks of xdd programs simultaneously running across multiple computer systems.

Make your external screen your primary screen

Or how to move gnome panels to a different monitor in Ubuntu.

multiMonitor setup

In my particular setup, I have a laptop and a much larger external display. I like having my main Gnome panel on the external display and use the laptop display for other things like Skype and email.

To make this change permanent, I use some gconf magic:

#!/bin/bash
gconftool-2 --set "/apps/panel/toplevels/top_panel_screen0/monitor" --type integer "1"
gconftool-2 --set "/apps/panel/toplevels/bottom_panel_screen0/monitor" --type integer "1"

To reverse this, you can run this:

#!/bin/bash
gconftool-2 --set "/apps/panel/toplevels/top_panel_screen0/monitor" --type integer "0"
gconftool-2 --set "/apps/panel/toplevels/bottom_panel_screen0/monitor" --type integer "0"

The beautiful thing about this is that when you are on the go, Gnome will put the panels on your laptop display when the external display is not available.

Upgrading the CPU in a Dell Studio XPS 1640

While the default Dell Studio XPS 1640 is fast and a decent work laptop, it needed to do one extra thing: run virtual machines (VMs). I spend a lot of my time debugging, hacking and otherwise trying to make software behave in ways that the developers had not intended.

Tools like VirtualBox, VMware, Xen and QEMU make this job less tedious. The problem is that with the Intel T7350 CPU, I cannot run 64-bit VMs. It turns out the T7350 is a 64-bit capable chip, but without the VT-x extension that allows for hardware-assisted virtualization.

Look your Intel CPU up to see if it supports the VT-x flag. It turns out upgrading the CPU is relatively easy, as Dell did a good job designing the laptop. The instructions are all in their online service manual.

Replacing CPU in a Dell Studio XPS 1640

The only hard part is figuring out what CPU is compatible and then buying it. Thankfully my company had no problem ordering an upgrade and after the first successful attempt, they ordered more for my other colleagues as well.

When picking an upgrade CPU, you must find one your motherboard supports. Your best and most compatible bet is to choose one in the same family as your own CPU, essentially a later model. Since the T7350 has a 1066 MHz front side bus (FSB), a 45nm process and takes 25W, it is easy to find a similar CPU with those values but a much higher clock rate, more L2 cache and, most importantly, the VT-x flag.

Wikipedia has a list of all mobile processors that have the VT-x flag.

  • Mobile Core 2 Duo T5500, T5600, T6670, T7100, T7200, T7250, T7300, T7400, T7500, T7600G, T7700, T7800, U7500, L7200, L7300, L7400, L7500, L7700, U7600, U7700 (Merom)
  • Mobile Core 2 Duo SU7300, SU9300, SU9400, SU9600, SL9300, SL9380, SL9400, SL9600, SP9300, SP9400, SP9600, P7350 (mac), P7370, P7550 (confirmed), P7570, P8400, P8600, P8700, P8800, P9500, P9600, P9700, T8100, T8300, T9300, T9400, T9500, T9550, T9600, T9800, T9900 (Penryn)

A successor of the T7350 (Merom-2M) is the P8700 (Penryn-3M), which fits all my requirements for the laptop. It is faster, more importantly has the VT-x flag, and is easily found online very cheaply.

The installation went without incident, and upon reboot I entered the BIOS and enabled the "Hardware Virtualization" option. After saving and rebooting, VirtualBox finally gave me the option to create and run 64-bit VMs.

An additional note: updating (flashing) the BIOS is not necessary. The BIOS checks whether the CPU has the flag and will not display the "Hardware Virtualization" option if the CPU does not support it.

Ubuntu Karmic 9.10 on Dell Studio XPS 1640

The very first thing I did when the company I work for gave me a laptop, a Dell Studio XPS 1640, was to install Ubuntu Karmic on it. No need for Windows on this thing; I plan on working, not playing.

karmic 9.10

Dell is very good about their laptops. Whenever I had a question, I could easily look up everything I needed to know about their hardware online. The 1640 is no exception with its manual. The setup:

  • Intel Core2 Duo: T7350
  • 4GB of DDR3 ram
  • Radeon Mobility HD 3670
  • RV635 Audio device
  • Intel Corporation Wireless WiFi Link 5100
  • BCM5784M Gigabit Ethernet
  • Intel 82801I (ICH9 Family) Chipset
  • Intel ICH9M/M-E SATA AHCI Controller
  • Western Digital WD3200BJKT-7 320GB hard drive
  • TSSTcorp DVD+/-RW TS-T633A, D600

Key Elements:

  • Ubuntu Karmic 9.10
  • typical development setup (build-essential)
  • Skype
  • fglrx
  • Virtualbox
  • ext4 filesystem

The install happened in September 2009, but I felt it best to share what I went through before going to Ubuntu Lucid 10.04. The install itself went without a problem; everything loaded up fine. At the time, I was using the 'ati' xorg driver.

I attached a separate monitor (Dell 23") and configured it using the Display Preferences tool to be one big desktop. This gets a little weird with one monitor at 1920x1080 and the laptop LCD at 1366x768: it creates a virtual desktop that covers both screens, with a little cut-off at the bottom (or top) of the laptop LCD screen.

I wanted to switch over to the fglrx driver for its 3D acceleration, but immediately ran into a problem: my screen setup above would just not work. I hunted through forums and documentation, and eventually, with some guesswork, I managed to get the big virtual desktop setup and fglrx to work together. Here is my modified /etc/X11/xorg.conf:

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "DELL S2309W"
    HorizSync       30.0 - 83.0
    VertRefresh     50.0 - 76.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "LCD"
    HorizSync       30.0 - 83.0
    VertRefresh     50.0 - 76.0
    Option         "DPMS"
EndSection

Section "Screen"
    Identifier  "Configured Screen Device"
    Device  "Configured Video Device"
    DefaultDepth    24
    SubSection "Display"
        Virtual 3840 2160
    EndSubSection
EndSection

Section "Module"
    Load    "glx"
EndSection

Section "Device"
    Identifier  "Configured Video Device"
    Driver  "fglrx"
    Option "AccelMethod" "exa"
EndSection

Section "ServerFlags"
    Option "RandR" "on"
EndSection

The benefit of this setup is that I was then able to turn on Compiz for a compositing window manager. I use Normal; I do not need the flashy effects: System->Preferences->Appearance->Visual Effects->Normal. It worked seamlessly across the two screens, and the OpenGL programs worked flawlessly.

So there you are: everything worked out of the box on the laptop except for the ability to have an additional monitor, and even that was fixed with a bit of hacking. Aside from its looks, it is a very nice desktop replacement. Everything just simply works.

vNES for J2ME on your mobile

If you have ever wanted to play an NES game on your mobile, then you might have heard about vNES. There is a J2ME version, which allows it to run on most mobiles available today. There are, however, a few rough edges to this application, as it requires assembling the necessary files together and running a Windows batch file. From a Linux point of view, I have created a replacement shell script that does a better job. It requires unix2dos, rename, and bash.

Just place this file into your vNes directory: makejar.sh

#!/bin/bash
cd roms
#uppercase all our roms
rename 'y/a-z/A-Z/' *.nes
#remove all spaces, shorten name
rename "s/ *//g" *.NES
#rename roms to png
rename 's/\.NES$/\.png/' *.NES
#create our list of roms
package=""
for file in `dir -A` ; do
    package="${package}${file%.png}\n"
done
echo -e "$package" > ../package.txt
cp *.png ../package
cd ..
#make sure list is with windows line breaks
unix2dos package.txt
cp package.txt package/2.txt
cd package
#package everything up
zip -r -9 ../vnes.jar *
cd ..

Be sure to set: chmod +x makejar.sh

General advice:

The NES rom files must be in iNES format, renamed from *.rom to *.png, and copied into the "package" directory. There is a file in "package" called 2.txt, which lists all the roms you have. Rom names should be in capital letters, with .png at the end, not .rom nor .nes. Example: castlevania_3.nes --> CASTLE3.png, and in 2.txt add a new line with CASTLE3 (without quotes).

Here is a list of possible problems:

This usually means that something is not right with 2.txt: make sure there is an extra line at the end of the list. Use a space if you have to, and remember to use Windows-based EOL 'end of line' characters. It helps to use unix2dos to convert your Unix line endings to DOS ^M endings.

You get the "application is invalid" error message:

Delete the line "MIDlet-Data-Size: 1024000" in the file "package\META-INF\MANIFEST.mf"; this line tells the mobile how big the Java application is. The problem is that some mobiles just do not agree with that size and, as a result, will refuse to run a slightly dodgy application. It is best to remove the line for maximum compatibility.

When loading a rom you get "java.lang.NullPointerException" error message:

This happens because the rom file either does not exist in the package directory, the name is spelled incorrectly in the 2.txt file, or the line endings are wrong. Verify that the files exist as *.png in the 'package' directory and, to be safe, run unix2dos or todos on the 2.txt file to make sure the line endings are correct. The application expects DOS ^M line endings.
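For reference, the transformation unix2dos performs on 2.txt can be sketched in a few lines of Python (`unix2dos_bytes` is a hypothetical helper of mine, not the real tool):

```python
def unix2dos_bytes(data):
    """Rewrite LF line endings as CRLF, roughly what unix2dos does to 2.txt.

    Normalizing CRLF back to LF first keeps the function idempotent, so an
    already-converted file is not double-converted.
    """
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

print(unix2dos_bytes(b"CASTLE3\nMARIO\n"))  # b'CASTLE3\r\nMARIO\r\n'
```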