I really couldn’t be happier about Getty Images’ addition to the non-commercial “free to use” stock photo arsenal.
Before we start – this tutorial assumes you’re using an Ubuntu Server, and you’re OK with removing your existing unixODBC driver manager and any problems that come with that.
OK – on to the goods.
- Remove any previous unixODBC packages – take note of any additional packages APT wants to remove so that you can reconfigure/reinstall/fix them later:
$ sudo apt-get remove libodbc1 unixodbc unixodbc-dev
- (Optional – only necessary if you don’t use my .deb package) Get your system ready to compile software if you don’t already have make and gcc installed:
$ sudo apt-get install build-essential
Now you have three choices: download, configure, and compile it yourself; use my modified version of the “build_dm” script Microsoft offers with the SQL Server ODBC Driver for Linux; or use the unixodbc_2.3.2-1_amd64 Ubuntu 12.04 LTS package I built.
Personally – I’d choose the package, as any other packages that depend on unixodbc or libodbc should then install cleanly and use our custom unixODBC to satisfy their requirements.
Ubuntu deb package method:
- Get the package:
$ wget http://onefinepub.com/wp-content/uploads/2014/03/unixodbc_2.3.2-1_amd64.deb
- Install the package:
$ sudo dpkg -i unixodbc_2.3.2-1_amd64.deb
Automated script method:
- Get the automated build_dm.sh script here or use this command:
$ wget https://raw.github.com/Andrewpk/Microsoft--SQL-Server--ODBC-Driver-1.0-for-Linux-Fixed-Install-Scripts/master/build_dm.sh
- Make sure it’s executable and then run it:
$ chmod u+x build_dm.sh; sudo ./build_dm.sh --libdir=/usr/lib/x86_64-linux-gnu
- After it’s finished, the script will leave you a /tmp/unixODBC.RANDOMNUMBERS directory; it tells you to change into it and run ‘make install’. An example of the command I ran is below – replace the XXXX’s with the exact path the script gave you when it finished:
$ sudo su -c 'cd /tmp/unixODBC.XXXX.XXXX.XXXX/unixODBC-2.3.2; make install'
That’s it – unixODBC was automatically configured with some options the Microsoft ODBC driver recommends and the make target “install” was executed.
Do it yourself method:
- Download unixODBC
$ wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.2.tar.gz
- Ungzip and untar the gzipped tarball – this example uses a modern GNU tar:
$ tar -zxvf unixODBC-2.3.2.tar.gz
- Change to the new directory that has been created:
$ cd unixODBC-2.3.2
- Configure with any custom options you want – this is an example for 64-bit Ubuntu using the recommendations the Microsoft ODBC driver provides for server installations (note: if you’re installing on a headless server, you may want to add "--enable-stats=no" to squeeze out a bit more performance):
$ ./configure --enable-gui=no --enable-drivers=no --enable-iconv --with-iconv-char-enc=UTF8 --with-iconv-ucode-enc=UTF16LE --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --sysconfdir=/etc
- Make the install target with root privileges:
$ sudo make install
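Whichever method you chose, it’s worth sanity-checking the result. A minimal guarded sketch (odbcinst ships with unixODBC, so it should be on your PATH once the install succeeds):

```shell
# Check that the unixODBC driver manager is actually in place:
if command -v odbcinst >/dev/null; then
    odbcinst -j                  # prints the unixODBC version and config-file paths
    status="installed"
else
    status="odbcinst not found"  # fall back gracefully if unixODBC isn't present
fi
echo "unixODBC status: $status"
```

On success, odbcinst -j also shows which odbc.ini/odbcinst.ini files are in use – handy when a driver mysteriously fails to load later.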
On Debian-based systems, aptitude can be quite useful for searching and displaying information about packages.
Aptitude does include an ncurses interface, but you don’t ever have to use it. Need a list of installed packages with “python3.3” somewhere in their package name?
$ aptitude search '~ipython3.3'
i python3.3 - Interactive high-level object-oriented language (version 3.3)
All of those letters that prefix the package names are interesting though. “p” and “i” are easy enough to figure out by process of elimination for a new user – but this short list will help you with the rest of them:
These are the values of the “current state” flag – the first flag before the package name:
i – the package is installed and all its dependencies are satisfied.
c – the package was removed, but its configuration files are still present.
p – the package and all its configuration files were removed, or the package was never installed.
v – the package is virtual.
B – the package has broken dependencies.
u – the package has been unpacked but not configured.
C – half-configured: the package’s configuration was interrupted.
H – half-installed: the package’s installation was interrupted.
These are the values of the “action” flag – the second flag before a package name (if there is none – no action is to be performed on that package):
i – the package will be installed.
u – the package will be upgraded.
d – the package will be deleted: it will be removed, but its configuration files will remain on the system.
p – the package will be purged: it and its configuration files will be removed.
h – the package will be held back: it will be kept at its current version, even if a newer version becomes available, until the hold is cancelled.
F – An upgrade of the package has been forbidden.
r – the package will be reinstalled.
B – the package is “broken”: some of its dependencies will not be satisfied. aptitude will not allow you to install, remove, or upgrade anything while you have broken packages.
For more aptitude reading, such as information on the regular-expression patterns and searches you can run (like my aptitude search '~ipython3.3' above), check out the aptitude user’s manual. For even more on managing packages within the Debian ecosystem, there’s the Debian manual on package management.
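A few more search terms map directly onto the flags listed above. A guarded sketch (the patterns come from the aptitude manual; the if keeps it runnable on systems without aptitude):

```shell
# Search terms that correspond to the state flags listed above:
if command -v aptitude >/dev/null; then
    aptitude search '~i~nssh'    # installed ("i") AND name contains "ssh"
    aptitude search '~c'         # removed, configuration files still present
    aptitude search '~b'         # packages with broken dependencies
    result="searched"
else
    result="aptitude not installed"
fi
echo "$result"
```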
The title says it all with this post: PHP segfaults with pdo_odbc on 64-bit platforms when using a query that has bound parameters (named, indexed/placeholder, bindParam(), and bindValue() in any combination).
I’ve submitted a pull request (which fails its Travis build due to 5.5.9 itself failing its Travis build) to keep this in the minds of the PHP maintainers, as it’s a pretty severe problem for people using PHP in a more “corporate” environment (where postgres and mariadb/mysql aren’t as pervasive).
With our millions of records stored in MS SQL and iSeries DB2 UDB databases at my current employer – this is a huge problem. We’re basically confined to 32-bit environments unless we want to pay for an additional method to connect to the iSeries (IBM DB2 Connect) and even then we’d be reliant on the MS SQL ‘sqlsrv’ php driver which I’ve found to be incredibly slow with medium-sized or larger data sets.
This hasn’t been a huge problem yet for most people using Windows, since IIS’ fastcgi support currently seems to be 32-bit only, but with the way Azure has been getting pushed and adopted, I assume demand for 64-bit fastcgi apps on Azure is approaching.
While bugs for this issue have been outstanding for quite some time, I’ve compiled a version of pdo_odbc as a shared extension with the patches people have agreed upon. After taking a look at the history of pdo_odbc – my shared extension may work with php versions as far back as the last stable release of the 5.3 branch and has been compiled on Ubuntu 13.10 x64 (so it should work on most 64-bit Ubuntu/Debian derivatives that have glibc 2.14+) against the php 5.5.9 stable source.
The extension is relatively simple to toss in to your php installation – but use it at your own risk. I’ll try to remember to keep it updated – but hopefully this will just get fixed upstream.
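For reference, dropping a prebuilt shared extension into PHP roughly amounts to the following. This is a hedged sketch – the fallback extension_dir path and the conf.d location are assumptions that vary by distro and PHP build:

```shell
# Locate PHP's extension directory (falling back to an assumed Ubuntu default):
if command -v php >/dev/null; then
    ext_dir=$(php -r 'echo ini_get("extension_dir");')
else
    ext_dir=/usr/lib/php5/20121212   # assumed default for PHP 5.5 on Ubuntu
fi
echo "copy pdo_odbc.so into: $ext_dir"
# Then enable it, e.g.:
#   sudo cp pdo_odbc.so "$ext_dir"/
#   echo "extension=pdo_odbc.so" | sudo tee /etc/php5/conf.d/pdo_odbc.ini
#   php -m | grep -i pdo_odbc        # confirm the module loads
```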
Here’s a link to the php 5.5 (5.5.9 to be precise) 64-bit patched pdo_odbc shared extension compiled on Ubuntu (Ubuntu 13.10 – but should work on most modern Ubuntu/Debian variants without any problems).
On non-Ubuntu/Debian platforms, you may get an error like the following:
"PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo_odbc.so' - libodbc.so.1: cannot open shared object file: No such file or directory"
You’ll probably need to create some symlinks.
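The usual cause is that the unixODBC library on disk carries a different version suffix (often libodbc.so.2) than the libodbc.so.1 the extension was linked against. A symlink bridges the gap; the sketch below runs in a scratch directory so it’s safe anywhere – on a real system you’d point at your actual library directory (e.g. /usr/lib64 or /usr/lib/x86_64-linux-gnu, an assumption to verify on your box) and run ln and ldconfig as root:

```shell
# Simulate the fix in a throwaway directory:
libdir=$(mktemp -d)
touch "$libdir/libodbc.so.2"                 # stand-in for the library you actually have
ln -s libodbc.so.2 "$libdir/libodbc.so.1"    # the SONAME the extension is asking for
ls -l "$libdir"
# Real-system equivalent (check paths first with: find / -name 'libodbc.so*'):
#   sudo ln -s /usr/lib/x86_64-linux-gnu/libodbc.so.2 /usr/lib/x86_64-linux-gnu/libodbc.so.1
#   sudo ldconfig
```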
If you get an error about glibc – it’s because I’ve compiled this against glibc 2.14 initially. This is a problem for both Ubuntu 12.04 LTS and CentOS, so I’ll likely be recompiling this against glibc 2.12 in the very near future.
I have been somewhat vague in this post on purpose, compared to my normal “tutorial” type posts, due to the technical nature of the problems described here. You should not follow a tutorial or step-by-step procedure without fully understanding the steps you’re executing when it comes to hacking extensions/patches into your programming language runtime – unexpected results may occur, which is why I’ve posted my compiled extension AS-IS with a “use at your own risk” disclaimer.
For those of us still using PHP:
Oh, but your favorite framework/library/ORM doesn’t support OPcache because OPcache, unlike APC, has no userland caching functionality? Well, now we’re in quite a predicament: PHP upgraded, new opcode caching via OPcache, and yet this setup feels like we’ve lost something. Don’t worry – you don’t have to run a memcached server and install a memcache extension just for userland caching; there’s APCu.
APCu is a fully functioning userland caching implementation that can be overly simplified with this equation:
The old APC extension – OPcache’s OPcode caching = APCu
If you have a lot of code that does detection of the old APC extension, you can even enable the compile flag --with-apc-bc to enable full APC compatibility mode.
Now you have no excuse – upgrade to PHP 5.5+.
When I originally wrote the how-to, I was frustrated with Microsoft’s PDO driver being so. Incredibly. Slow.
In my tests, I was using a table with a few hundred thousand rows. This table had up-to-date indexes that were relatively useful and fast.
As a starting point, I tested the query on SQL Management Studio and I could return a few thousand rows in 0.1 seconds on our test server.
For the second test, I used pdo_sqlsrv (the Microsoft provided PHP driver for SQL Server) to do the same query, and it would take around 20-30 seconds.
As a final test, I accessed the database via pdo_odbc (with the installed Microsoft SQL ODBC drivers), and the same query that had just taken 20-30 seconds returned just as fast as SQL Management Studio. PDO_SQLSRV was far too slow and we needed to move to something faster.
Here are some items (that I wish someone had told me when I was starting down this path) for you to consider when making your choice between pdo_odbc (using Microsoft SQL Server ODBC drivers), PDO_SQLSRV (Microsoft’s native SQL Server driver for php), or using the open-source freetds driver:
- When choosing the odbc route – you lose a lot of the data type inference and casting that the SQL Server driver for your programming language (such as pdo_sqlsrv) will do for you automatically, such as taking a date in the format of "03/01/2013" (March 1st, 2013) and inserting it into a ‘date’ column as '2013-03-01'. This is no big deal if you start out knowing that these features aren’t there, but for porting code – it would’ve been nice to know right off the bat.
- Need to connect to a SQL Server older than SQL Server 2005? Forget about the ODBC (pdo_odbc in my case) method; it will continually complain and give you all sorts of fun, generally meaningless errors, yet when you switch to pdo_sqlsrv – it will instantly work. Awesome, right? Your other choice in this situation is a freetds connection via odbc; this has also worked well for me, and I will try to put up a tutorial on getting that up and running easily in the near future.
- Do you only need to run a few queries here and there or will you be returning less than 1000 results from the database per query? You should probably just use the driver that comes with your programming language (pdo_sqlsrv in my case) as it’ll be the most painless and the easiest to get started with.
- Do you have a lot of code that currently accepts strange date formats (slashes included), or do you have a lot of old stored procedures? You’ll probably want to go with (in order of preference): your language’s driver (pdo_sqlsrv in my case), freetds, and lastly the odbc route. Why is odbc last in this case? Because all your code, including stored procedures, will need to be modified anywhere you’re depending on implicit type casting that SQL Server or the current driver is doing for you. The odbc route also causes all input parameters to be accepted as, more or less, the C type ‘SQL_C_WCHAR’, which will only be implicitly cast to one of three SQL character types. If you go the odbc route, you’ll likely have to perform quite a few CAST(CAST(SOMETHING AS VARCHAR) AS SOMEOTHERTYPE) operations.
So – if you’ve gotten this far and still want to read the rest of my original post – here it is (also, this is the ODBC C Datatype to SQL DATA TYPE conversion chart - you’ll likely need this if you go the ODBC route).
I understand most might think this is a ridiculous endeavor: having good, inexpensive, easy-to-make coffee. Hell, with everyone citing places like Tonx, Sweet Maria’s, or King Marco himself, you would assume you either need to spend an arm and a leg, or go through quite an arduous process with lots of expensive equipment (you can’t convince me that a cheap stove-top roaster will suffice) in order to enjoy a decent cup of coffee.
I know this is internet hell.
This is where websites go to die as the internet coffee-nerdgasm devours all in its path.
I don’t care.
I just want to share some thoughts.
After having freshly roasted coffee prepared properly for me at a few small coffee shops, I’ve found the following as my best in class method for decent coffee (90+ percent of what I could get by going to a small independent coffee shop that sources & roasts their own beans) for a small fraction of the cost.
- Learn how to make french press coffee. Yep – this entire post is about french press coffee. In my opinion, for the best “cheap” cup of coffee – this is the way to go. If you want to purchase a great bean mill, you could easily switch to the aeropress method for a fantastic cup, but a french press seems to be a bit more forgiving as far as the mill is concerned.
- Find a few local roasters. (In Michigan my favorite is Chazzano coffee.)
- Try a few of their roasts to find one that you like if you don’t already have a favorite roast (hint: visit the roaster if you can). (I’m a fan of most beans roasted from “American” to “Full City”.)
- Buy the beans whole in small amounts. I like to buy small enough batches that I can use them within a week or so and not even have to be concerned about long term storage like one might with a giant sack of beans from Costco. I typically keep them in the vacuum bags they come in and just squeeze any air out of the valve after each opening.
- Buy a burr grinder. People will tell you that you’ll need to spend at least $130 on a good burr grinder, and this might be true if you’re looking to make aeropress or espresso. I’ve also seen burr grinders like this one go for around $50 on sale (with a “retail” price of $80+) while being really no better than this model, which you can score for around $20 on sale – with coupons and all the other ridiculous things retail shops make you do if you want to save money. For this exercise in frugality, a cheap burr grinder will do just fine.
- Test the grind setting. Most cheap burr grinders produce grinds vastly different from their stated grind level. The grinder I own is at least two numerical levels away from the stated grind level, and does not produce a coarse enough grind for some beans/roasts/preparation methods.
- Buy a french press. Bodum is the de facto standard, but you can get a decent press from Ikea as well (around $15 if I’m not mistaken).
And that’s it. For less than half the price of a decent aeropress setup, you can make fresh, delicious french press coffee at your desk and spare yourself the hell of drinking crap drip coffee made from prepackaged pouches of grinds that burned in the carafe hours before you arrived. My whole setup cost $40, subtracting the cost of the “Ben and Patty Show” mug that I use at work.
No need for subscriptions and waiting for the mail or expensive roasting equipment and the procedures that go along with it. Granted, this entire method requires a local roaster that roasts beans you enjoy – but if you can find one, you can actually talk to them. You can tell a local roaster things you like about coffee, or cups you’ve enjoyed and get suggestions on bean buying from them. You can also get beans a day or two after they’ve been roasted to enjoy the beans at their peak flavor potential. Also, the local roasters in my area sell freshly roasted coffee (with roasted date written on the label) at a few grocery stores – so if you’re in a hurry you don’t even have to visit their retail location. There is an additional cost for the roasted beans in the quantities I buy over buying unroasted “green” coffee – roughly $4 per unit.
For me, the extra $4-$5 – with no need for expensive equipment or time-consuming, messy activities (time better spent on development) – is money well spent.
And no, K-Cups are never the answer. Never.
So Ruby is kind of like Perl, in that output can sometimes be buffered internally before it’s written out (see: IO#sync). In Perl, you’d use $| = 1; to disable output buffering, or rather, enable “autoflush” (see: perlvar – just use find to locate autoflush); Ruby’s equivalent is STDOUT.sync = true.
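Side by side, the two knobs look like this – a guarded sketch that simply skips whichever interpreter isn’t installed:

```shell
# Perl's $| and Ruby's STDOUT.sync flip the same switch:
if command -v perl >/dev/null; then
    perl -e '$| = 1; print "perl: autoflush enabled\n"'
fi
if command -v ruby >/dev/null; then
    ruby -e 'STDOUT.sync = true; puts "ruby: sync enabled"'
fi
marker="done"
echo "$marker"
```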
CocoaPods (the fantastic dependency manager for Objective-C) makes extensive use of Ruby, so to make output capturable they’ve included a helper in their pod script:
STDOUT.sync = true if ENV['CP_STDOUT_SYNC'] == 'TRUE'
So if you want to capture any output from cocoapods, just make sure you set that environment variable. In the shell, that’d look like this:
$ export CP_STDOUT_SYNC=TRUE
After that’s set, feel free to capture any output from cocoapods, such as cocoapods’ version, in bash to your heart’s content. Here’s a quick example of just that:
$ test=$(pod --version)
$ echo $test ### $test should have captured output now
There will not be a day two to my Zerowater Pitcher Day 1 post…
Plastic taste in mouth == return to store.
Our office has terrible tap water. The kind of water that leaves prominent white solids around the inside of your coffee mug if left to sit with coffee or water overnight.
This Zerowater pitcher is my first trial at combating the terrible taste and high TDS (total dissolved solids) in our water. Unfortunately, as of day 1 – after rinsing and washing all of the pitcher’s components and letting at least 3 pitchers of water filter through – I’m left with plastic-tasting water. This taste is reminiscent of an Aquafina or Dasani bottle (heavier grade plastic 20 oz. bottle) left in the sun for a few hours.
So while the TDS in this water may be zero – the taste is off. As of day one, this Zerowater pitcher filtration system gets a huge thumbs down.