
Fix for Chromium Network Location Provider Returning Error Code 403

I had been using Chromium and wondered why its HTML5 geolocation features kept failing with error code 403 and the message in the title of this post. It turns out that Chromium does not ship with Google’s API credentials the way the official Google Chrome build does, so those services are unavailable (Chrome and Chromium do not use the operating system’s built-in location services; they rely on Google’s API). After reviewing the Chromium documentation here, I’ve come up with the following steps to properly enable Chromium’s Google services. It’s a lot of work, but I hope it’s useful for the many people using Google’s open-source Chromium on Linux, OS X, or Windows.

  1. Join the chromium dev group here by subscribing. You don’t have to receive email updates, you just have to be a member of the group in order for the right APIs to show up in the developer console.
  2. Visit the Google API Console and create a new project.
  3. Visit your project’s page in the console, click the APIs link in the left menu, and begin subscribing to developer APIs of your choice. In order to resolve network location provider issues when using Chromium, you’ll need the “Google Maps Geolocation API”. The Chromium documentation notes the following as useful APIs for Chromium:
    • Chrome Remote Desktop API
    • Chrome Spelling API
    • Chrome Suggest API
    • Chrome Sync API
    • Chrome Translate Element
    • Google Maps Geolocation API – (requires enabling billing but is free to use; you can skip this one, in which case geolocation features of Chrome will not work)
    • Safe Browsing API
    • Speech API
    • Time Zone API
    • Google Cloud Messaging for Chrome
    • Drive API (Optional, enable this for Files.app on Chrome OS and SyncFileSystem API)
    • Google Now For Chrome API (Optional, enabled to show Google Now cards)
  4. Click the settings gear after enabling the APIs of your choice and choose “Project Billing Settings”.
  5. Click “Enable Billing”, choose a personal billing account, and enter your billing information. Yes, in order for Google Maps Geolocation API calls to work, you must have a payment method on your account. Don’t worry: for personal accounts the Geolocation API quota is 100 calls per day, and those 100 calls are billed at $0.00, so adding a payment method doesn’t mean you’ll be charged. If you’re still worried, check out Google’s documentation on Geolocation pricing here.
  6. Click the “Credentials” link in the left menu under “APIs & auth” on the Google API Console.
  7. Click “create new key”, then click “server key”, then click “create”. This is your “GOOGLE_API_KEY” which you’ll need later.
  8. Under “OAuth”, click “Create new Client ID”, choose “Installed application” and click “Configure consent screen”. Fill in the required information in the form and click “save”.
  9. Choose “Installed application” again, and click “Create ClientID”. Now you have your GOOGLE_DEFAULT_CLIENT_ID and GOOGLE_DEFAULT_CLIENT_SECRET which you’ll need later.

Now you’ll need to set up some environment variables for Chromium to pick up when it’s launched. The rest of these instructions are specific to OS X using launchd, though with a little googling it shouldn’t be difficult to find a solution that works with your OS’s service/startup/daemon manager:

  1. Create a new script in your home directory, mine is named ‘.setGoogleEnvVars.sh’
  2. Add the following to the script, replacing the XXXs with appropriate values from your Google API developer console:
    launchctl setenv GOOGLE_API_KEY XXX
    launchctl setenv GOOGLE_DEFAULT_CLIENT_ID XXX
    launchctl setenv GOOGLE_DEFAULT_CLIENT_SECRET XXX
    
  3. Create a new launchd service in your home’s LaunchAgents directory, ~/Library/LaunchAgents/, mine is called local.setGoogleEnvVars.plist with the following contents, replacing the label and program argument of “~/.setGoogleEnvVars.sh” if necessary:
    <?xml version="1.0" encoding="UTF-8"?>                                                                                  
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>Label</key>
      <string>local.setGoogleEnvVars</string>
      <key>ProgramArguments</key>
      <array>
        <string>sh</string>
        <string>-c</string>
        <string>~/.setGoogleEnvVars.sh</string>
      </array>
      <key>RunAtLoad</key>
      <true/>
    </dict>
    </plist>
    
  4. Make sure your startup script is executable by using the terminal and chmod +x to set the executable bit like this, replacing the script name if necessary:
    chmod +x ~/.setGoogleEnvVars.sh
  5. At this point, you can either restart your computer or load the service with launchctl load ~/Library/LaunchAgents/local.setGoogleEnvVars.plist, replacing the plist name if necessary.

That’s it! It’s a lot of work, but you’ve now enabled any of your selected Google APIs in Chromium, and you should no longer receive error messages like network location provider at 'https://www.googleapis.com/' : returned error code 403. code 2 if you’ve chosen to enable the Geolocation API and billing.
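If you want to sanity-check the key outside of Chromium, you can hit the same Geolocation endpoint directly. This is my own verification sketch, not part of the Chromium setup: it assumes your server key is exported as GOOGLE_API_KEY, and it prints only the HTTP status code (a working key with billing enabled should return 200; a key without API access or billing returns 403):

```shell
# POST an empty JSON body; Google then geolocates based on the caller's IP.
# Only the HTTP status code is printed; `|| true` keeps the script from
# aborting if you happen to be offline.
GOOGLE_API_KEY="${GOOGLE_API_KEY:-XXX}"   # replace XXX or export beforehand
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Content-Type: application/json' \
  -d '{}' \
  "https://www.googleapis.com/geolocation/v1/geolocate?key=${GOOGLE_API_KEY}" \
  || true
```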

Installing unixODBC 2.3.2 and higher on Ubuntu 12.04 LTS

Before we start – this tutorial assumes you’re using an Ubuntu Server and that you’re OK with removing your existing unixODBC driver manager, along with any problems that come with that.

OK – on to the goods.

1. Remove any previous unixODBC packages – take note of any additional packages APT wants to remove so that you can reconfigure/reinstall/fix them later:
  $ sudo apt-get remove libodbc1 unixodbc unixodbc-dev
2. (Optional – only necessary if you don’t use my .deb package) Get your system ready to compile software if you don’t already have make and gcc installed:
  $ sudo apt-get install build-essential

Now you have three choices: download, configure, and compile it yourself; use my modified version of the “build_dm” script Microsoft offers with the SQL Server ODBC Driver for Linux; or use the unixodbc_2.3.2-1_amd64 Ubuntu 12.04 LTS package I built.

Personally, I’d choose the package, as any other packages that depend on unixodbc or libodbc should install easily and be able to use our custom unixODBC to fulfill their requirements.

Ubuntu deb package method:

1. Get the package:
  $ wget http://onefinepub.com/wp-content/uploads/2014/03/unixodbc_2.3.2-1_amd64.deb
2. Install the package:
  $ sudo dpkg -i unixodbc_2.3.2-1_amd64.deb

Automated script method:

1. Get the automated build_dm.sh script here or use this command:
  $ wget https://raw.github.com/Andrewpk/Microsoft--SQL-Server--ODBC-Driver-1.0-for-Linux-Fixed-Install-Scripts/master/build_dm.sh
2. Make sure it’s executable and then run it:
  $ chmod u+x build_dm.sh; sudo ./build_dm.sh --libdir=/usr/lib/x86_64-linux-gnu
3. When it finishes, the script leaves you with a /tmp/unixODBC.RANDOMNUMBERS directory and tells you to change into it and run ‘make install’. An example of the command I ran is below – replace the XXXX’s with the exact path the script gave you:
  $ sudo su -c 'cd /tmp/unixODBC.XXXX.XXXX.XXXX/unixODBC-2.3.2; make install'

That’s it – unixODBC was automatically configured with the options the Microsoft ODBC driver recommends, and the “install” make target was executed.

Do it yourself method:

1. Download unixODBC:
  $ wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.2.tar.gz
2. Extract the gzipped tarball – this example uses a modern GNU tar:
  $ tar -zxvf unixODBC-2.3.2.tar.gz
3. Change to the new directory that has been created:
  $ cd unixODBC-2.3.2
4. Configure with any custom options you want – this is an example for 64-bit Ubuntu using the recommendations provided by the Microsoft ODBC driver for server installations (note: if you’re installing on a headless server, you may want to add "--enable-stats=no" to increase performance):
  $ ./configure --enable-gui=no --enable-drivers=no --enable-iconv --with-iconv-char-enc=UTF8 --with-iconv-ucode-enc=UTF16LE --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --sysconfdir=/etc
5. Make the install target with root privileges:
  $ sudo make install
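Not sure what to pass for --libdir in step 4? A small sketch (my addition, assuming you’re on a standard multiarch Debian/Ubuntu layout) that picks the conventional library directory from the machine type:

```shell
# Map `uname -m` output to the Debian/Ubuntu multiarch library directory.
case "$(uname -m)" in
  x86_64)    LIBDIR=/usr/lib/x86_64-linux-gnu ;;
  i386|i686) LIBDIR=/usr/lib/i386-linux-gnu ;;
  *)         LIBDIR=/usr/lib ;;   # fall back for other architectures
esac
echo "$LIBDIR"   # pass this as --libdir=... to ./configure
```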

Your package Aptitude – flags and searching

On Debian-based systems, aptitude can be quite useful for searching and displaying information about packages.

Aptitude does include an ncurses interface, but you don’t ever have to use it. Need to get a list of installed packages that have the word “python” somewhere in their package name or description?

$ aptitude search '~ipython3.3'
i   python3.3 - Interactive high-level object-oriented language (version 3.3)

All of those letters that prefix the package names are interesting though. “p” and “i” are easy enough for a new user to figure out by process of elimination – but this short list will help you with the rest of them:

These are the values of the “current state” flag – the first flag before the package name:

i – the package is installed and all its dependencies are satisfied.
c – the package was removed, but its configuration files are still present.
p – the package and all its configuration files were removed, or the package was never installed.
v – the package is virtual.
B – the package has broken dependencies.
u – the package has been unpacked but not configured.
C – half-configured: the package’s configuration was interrupted.
H – half-installed: the package’s installation was interrupted.

These are the values of the “action” flag – the second flag before a package name (if there is none, no action is to be performed on that package):

i – the package will be installed.
u – the package will be upgraded.
d – the package will be deleted: it will be removed, but its configuration files will remain on the system.
p – the package will be purged: it and its configuration files will be removed.
h – the package will be held back: it will be kept at its current version, even if a newer version becomes available, until the hold is cancelled.
F – An upgrade of the package has been forbidden.
r – the package will be reinstalled.
B – the package is “broken”: some of its dependencies will not be satisfied. aptitude will not allow you to install, remove, or upgrade anything while you have broken packages.
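Since these flags are fixed single-character columns at the start of each line, they’re easy to script against. A tiny self-contained sketch (the sample output is hard-coded here so you can run it anywhere, even without aptitude installed):

```shell
# Two lines in the shape `aptitude search` prints, hard-coded for the demo.
sample='i   python3.3 - Interactive high-level object-oriented language (version 3.3)
c   oldtool - a removed package whose configuration files remain'
# The current-state flag is always the first character of each line.
printf '%s\n' "$sample" | cut -c1
```

On a real system you’d pipe `aptitude search '~i...'` straight into the same `cut`.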

For more aptitude reading, such as the regular-expression patterns and searches you can do (like my aptitude search '~ipython3.3'), check out the aptitude user’s manual; for even more information on managing packages within the Debian ecosystem, there’s the Debian manual on package management.

PHP segfaults with pdo_odbc and bound parameters on 64-bit platforms

The title says it all with this post: PHP segfaults with pdo_odbc on 64-bit platforms when using a query that has bound parameters (named, indexed/placeholder, bindParam(), and bindValue() in any combination).

I’ve submitted a pull request (which fails its Travis build only because 5.5.9 itself fails its Travis build) to keep this in the minds of the PHP maintainers, as it’s a pretty severe problem for people using PHP in a more “corporate” environment (where Postgres and MariaDB/MySQL aren’t as pervasive).

With our millions of records stored in MS SQL and iSeries DB2 UDB databases at my current employer, this is a huge problem. We’re basically confined to 32-bit environments unless we want to pay for an additional method to connect to the iSeries (IBM DB2 Connect), and even then we’d be reliant on the MS SQL ‘sqlsrv’ PHP driver, which I’ve found to be incredibly slow with medium-sized or larger data sets.

This hasn’t been a huge problem yet for most people using Windows, since IIS’ FastCGI support seems to be 32-bit only at the moment, but with the way Azure has been getting pushed and adopted, I would assume demand for 64-bit FastCGI apps on Azure is approaching.

php 5.5 32-bit on Azure x64

PHP 5.5 is pre-installed as 32-bit on a Microsoft Azure 64-bit “Standard” scaled “Website”.

While bugs for this issue have been outstanding for quite some time, I’ve compiled a version of pdo_odbc as a shared extension with the patches people have agreed upon. After taking a look at the history of pdo_odbc, my shared extension may work with PHP versions as far back as the last stable release of the 5.3 branch; it was compiled on Ubuntu 13.10 x64 (so it should work on most 64-bit Ubuntu/Debian derivatives that have glibc 2.14+) against the PHP 5.5.9 stable source.

The extension is relatively simple to toss into your PHP installation – but use it at your own risk. I’ll try to remember to keep it updated – but hopefully this will just get fixed upstream.

Here’s a link to the php 5.5 (5.5.9 to be precise) 64-bit patched pdo_odbc shared extension compiled on Ubuntu (Ubuntu 13.10 – but should work on most modern Ubuntu/Debian variants without any problems).
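Wiring the shared extension in is the usual routine. The ini side is just one line; the path below is an example (check `php -i | grep extension_dir` for yours):

```ini
; in php.ini, or a conf.d snippet on Debian/Ubuntu-style layouts
extension=/usr/lib64/php/modules/pdo_odbc.so
```

Restart your web server or php-fpm afterwards, and confirm it loaded with `php -m | grep -i pdo_odbc`.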

On non-Ubuntu/Debian platforms, you may get an error like the following:

"PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo_odbc.so' - libodbc.so.1: cannot open shared object file: No such file or directory"

You’ll probably need to create some symlinks.
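For the libodbc.so.1 case specifically, the fix is a symlink giving the old soname a home that points at the 2.x library. Sketched here in a scratch directory so it’s safe to run anywhere; on a real box the directory would be wherever your libodbc.so.2 lives (e.g. /usr/lib64 or /usr/lib/x86_64-linux-gnu) and the ln would need root:

```shell
LIBDIR=$(mktemp -d)                  # stand-in for your real library directory
touch "$LIBDIR/libodbc.so.2.0.0"     # what unixODBC 2.3.x installs
# point the old soname at the new library: libodbc.so.1 -> libodbc.so.2.0.0
ln -s "$LIBDIR/libodbc.so.2.0.0" "$LIBDIR/libodbc.so.1"
readlink "$LIBDIR/libodbc.so.1"      # verify where the link points
```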

If you get an error about glibc, it’s because I initially compiled this against glibc 2.14. That’s a problem for both Ubuntu 12.04 LTS and CentOS, so I’ll likely recompile against glibc 2.12 in the very near future.

I have been purposely vague in this post compared to my normal “tutorial” type posts, due to the technical nature of the problems described here. You should not follow a tutorial or step-by-step procedure without fully understanding the steps you’re executing when it comes to hacking extensions/patches into your programming language runtime – unexpected results may occur, which is why I’ve posted my compiled extension AS-IS with a “use at your own risk” disclaimer.


How To Install Microsoft SQL Server ODBC Driver for Linux on Ubuntu Server

UPDATE: I’ve included a list of items to consider when connecting to a Microsoft SQL Server from Linux here. Please review this if you’re starting out and don’t fully understand your possible choices (using your programming language of choice’s driver, using the Microsoft ODBC driver for Linux, or using an open-source driver such as FreeTDS).

UPDATE 2: I’ve included a new link to a tutorial I wrote on how to install an updated unixODBC on Ubuntu Server.

Need to connect to a Microsoft SQL Server on Linux? Your best bet is to use their ODBC drivers (available here) – but OH NO – they’re only supported on Red Hat Enterprise Linux.

No fear – of course they’ll work on most 64-bit distributions.

Install instructions for Ubuntu Server 12.10 – this should actually work for most Debian/Ubuntu distributions that have packages available for openssl-1.0.0 and unixODBC 2.3.0 (though you may want to download and install 2.3.2 for better performance):

The following numbered steps have become mostly unnecessary: I’ve fixed the Microsoft scripts so you no longer have to use “--force” to install the driver on Ubuntu or create all the symlinks yourself. Those fixed scripts are available here: Microsoft SQL Server ODBC Driver 1.0 for Linux Fixed Install Scripts.

To install the driver manually:

1. Visit http://www.microsoft.com/en-us/download/details.aspx?id=36437 and download the file for “RedHat6\msodbcsql-11.0.2270.0.tar.gz”. Currently you can use the following command until the link changes:
  $ wget http://download.microsoft.com/download/B/C/D/BCDD264C-7517-4B7D-8159-C99FC5535680/RedHat6/msodbcsql-11.0.2270.0.tar.gz
2. Extract the tarball: tar -zxvf msodbcsql-11.0.2270.0.tar.gz
3. Download and install unixODBC 2.3.0+ if you haven’t already.
4. Change to the new directory (cd msodbcsql-11.0.2270.0) and run the install script:
  sudo bash install.sh install --accept-license --force
5. Make sure the SQL Server dependencies are installed:
  sudo apt-get install openssl libkrb5-3 libc6 e2fsprogs
6. Create some symlinks so everything works with the paths these binaries expect to find libraries at:
  • sudo ln -s /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /usr/lib/x86_64-linux-gnu/libcrypto.so.10
  • sudo ln -s /lib/x86_64-linux-gnu/libssl.so.1.0.0 /usr/lib/x86_64-linux-gnu/libssl.so.10
  • sudo ln -s /usr/lib/x86_64-linux-gnu/libodbcinst.so.2.0.0 /usr/lib/x86_64-linux-gnu/libodbcinst.so.1
  • sudo ln -s /usr/lib/x86_64-linux-gnu/libodbc.so.2.0.0 /usr/lib/x86_64-linux-gnu/libodbc.so.1

And that should be it. The last two symlinks depend on unixODBC 2.3.1 or higher. If you’re using 2.3.0 (please upgrade), you’ll need to link against the “1.0.0” libraries instead.

Test your install by connecting to your server using sqlcmd (sqlcmd -S my.sql.server.com -U username) to make sure everything is OK. You should now be able to configure ODBC to use the MS SQL ODBC Driver for Linux on Ubuntu.
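Once sqlcmd works, a DSN makes the driver usable from anything that talks to unixODBC. A minimal sketch for /etc/odbc.ini (the DSN name and server below are placeholders of mine; the Driver value must match whatever install.sh registered in /etc/odbcinst.ini, which was “SQL Server Native Client 11.0” on my install):

```ini
[MyMSSQL]
Driver = SQL Server Native Client 11.0
Server = my.sql.server.com
Port   = 1433
```

Then `isql -v MyMSSQL username password` should drop you at a SQL prompt using the same driver.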

Yes – in those last steps we’re creating symlinks from the “2.0.0” libraries with link names ending in “.so.1” – masquerading as the older version 1 doesn’t seem to hurt anything in any of my installations. According to the unixODBC changelog: “Major change is to change the library version number from 1 to 2 to signal the SQLLEN change for 64 land. Should have been done for 2.3.0, but better late than never. So if after installing you have apps that can’t find libodbc.so, its likely they are linked to libodbc.so.1, so just create a symlink from libodbc.so.2”

So I think we’re OK.


VirtualBox Headless (backgrounded) on Windows

So simple, yet so often passed over: this feature is ridiculously easy to use, and yet almost everyone I talk to about it has no clue that it exists.

To the poor saps like myself who must use Windows at their current job – or to the masochistic folks who use Windows to develop on at home – two shortcuts and your backgrounded ‘headless’ Linux server needs will be fulfilled:

1. Create your Linux VM as you normally would, configuring the necessary system resources, etc., and installing the OS – make sure you configure a NAT, Bridged, or Host-only network adapter during creation.
2. Install and configure an OpenSSH server as prescribed by your OS/package manager – often this can be set up as a default package/configuration step during install.
3. Note the IP address of the network adapter you plan on connecting to (ifconfig -a).
4. Power down the VM.
5. Create a shortcut on your desktop (which can be dragged to your start menu or wherever else you want it) with the “location of the item” or shortcut “target” set to something like this:
  "[FULL_PATH_TO_VIRTUALBOX_DIRECTORY]\VBoxManage.exe" startvm [NAME_OF_VM] --type headless
  Fill out the path and VM name as necessary – on my system VirtualBox is installed to "C:\Program Files\Oracle\VirtualBox" and I named my VM ‘dev2’ during the ‘name and operating system’ phase of creating the virtual machine in step 1.
6. Create a second shortcut in the same way, replacing the VirtualBox path and VM name as necessary, this time with the target:
  "[FULL_PATH_TO_VIRTUALBOX_DIRECTORY]\VBoxManage.exe" controlvm [NAME_OF_VM] savestate
7. You can now start and ‘save state and power off’ your virtual machine via shortcuts on your desktop. Use your favorite terminal application to connect to the IP you noted in step 3, and you’re all set.
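If you’d rather script it than click, the two shortcut targets boil down to a pair of VBoxManage calls. Here’s a small sh sketch (the script name and the “dev2” default are mine; it echoes the command first and only runs it if VBoxManage is on the PATH, so it’s safe to try anywhere, including non-Windows hosts):

```shell
#!/bin/sh
# Usage: headlessvm.sh [vm-name] [start|stop]
VM="${1:-dev2}"
case "${2:-start}" in
  start) CMD="VBoxManage startvm $VM --type headless" ;;  # boot in background
  stop)  CMD="VBoxManage controlvm $VM savestate" ;;      # suspend & power off
  *)     echo "usage: $0 <vm> start|stop" >&2; exit 1 ;;
esac
echo "+ $CMD"                                # show what we're about to run
if command -v VBoxManage >/dev/null 2>&1; then $CMD; fi
```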

It looks like a lot of steps, but it’s quick and painless if you already have a VM set up: you just need to know the IP address and the name you’ve given the VM in the VirtualBox Manager, and you can quickly create the two shortcuts yourself.

If for some reason you can’t connect to your virtual machine after it has powered on, you’re probably connecting to a network adapter with an IP address dished out via DHCP. Just open the VirtualBox Manager, power the VM on with the ‘Start’ button, and use the console window to log in and grab your new IP – or save yourself the headache and connect via a host-only network with a static IP.


Ubuntu Phone

Via Canonical’s Pre-CES Announcement and Ubuntu Phone landing page.

I thought this was interesting, as it fits right in with my “way of the future” dreams back when I got my first Palm OS phone.

“These will one day replace my computers…”

While that idea no longer seems to fit 100% of users, the idea of “one device to rule them all” is still tantalizing and is already becoming a reality for many iPad users as they forgo their “desktop” OSes entirely. Unfortunately for me, I’m a developer – so I may not see this future, though we developers can always dream (with an iPad mini wifi+cellular and my phone for calls, I can get close, and technically could do remote break-fix web work in a pinch if I had to).

The developer tools look pretty interesting, as the native toolkit seems to derive from Qt 5 and the Ubuntu QML implementation. I haven’t had a chance to dive into the API documentation yet, nor do I know if I will – as this could easily go the way of the Motorola Atrix’s webtop (everyone jumped on that bandwagon, right?).

The gestures very much remind me of what I loved from Palm’s webOS, and the openness(?) reminds me of what everyone was chanting when the Open Handset Alliance announced Android. The main difference here is that Canonical has shown its ability to work with major manufacturers while keeping its dedication to the platform and to what Ubuntu/Linux stands for. Meanwhile, Google has enforced restrictions on those who wish to have the full suite of Google Apps available to their users, while also showing some shady behavior in regard to Android deals and distribution. The Open Handset Alliance page hasn’t had a news item since 2011, which only furthers my belief that the Alliance is really more of a cult following.

For the masses, the Ubuntu phone could mean a stab at Google’s reign with Android as users become increasingly irritated with Google’s constant collection of their data and invasion of privacy – or simply a reasonable alternative for those who cling to AOSP ROMs in order to run a modern operating system when the manufacturer fails to produce a promised update (or takes forever to do so).

For Linux users, the Ubuntu phone could finally mean a handset coming to market that more accurately represents their desire for openness and freedom, and it could prove a worthy alternative to Android or iOS for their mobile phone/tablet needs.

Speculation aside, with RIM flailing, Nokia seemingly following suit, and Windows Phone barely making a dent in the market, some fresh competition from a player big enough to “bring it” is much needed.


The year of the Linux desktop…or something (2012 edition)

UPDATE: I’ve done a few more ‘Linux idiot’ test installs to check difficulty and completeness as a desktop OS on our laptop, and Ubuntu 12.10 with very minimal modification works great now with the updated packages. I may post another update regarding the modifications I’ve made to have it perform smoothly on our ‘2010-era’ laptop.

Every year you hear it.
It usually starts out with something like “Dell releases new PC with Ubuntu Linux option,” which quickly escalates into the industry’s resounding chorus chanting date('Y') . " is the year of the Linux Desktop."

Being familiar with this beating drum, and having once fallen prey to it when I was 21 (with the release of Ubuntu 4.10), I’ve gone from testing out many distributions a year to testing only one.

One distribution a year, as a “desktop” operating system.
I can’t really set aside much more time than that anymore, as I usually do a full switch for at least two days, lasting up to a few weeks depending on various factors.
For my very unscientific and opinionated test, I try my hardest to forget my 14 or so years of Linux experience and approach the entire process as a complete novice. For my testing distribution, I tend to pick whatever the most popular distribution is at the time in hopes of having the greatest hardware/software/community support, though this year I just defaulted to the Ubuntu desktop.

I’ve kept up a similar routine since roughly 1998, when I first tried Slackware Linux after seeing it running on a friend’s repurposed desktop computer serving files, websites, email, and eventually a Quake 3 Arena server.
It’s funny that after all these years the two things I remember most about this person are that he had an uncapped cable modem, and that his brother inadvertently introduced me to Linux (I honestly can’t even remember the guy’s name).

After installing Ubuntu 12.10, I quickly started to notice some hilarious issues. I had installed on a laptop, and I noticed that if the CPU stepped down its clock speed, the GUI started chugging a bit. After some more research (I had already installed the newest Nvidia drivers for the laptop’s discrete GPU), running the processor at the highest stepping level seemed to fix some of the issues. I promptly switched from Unity to GNOME and saw different problems (now with audio, the only change being that I ran GNOME at login instead of Unity). Things got ridiculous, and I quickly grew tired of the time I had spent tinkering and diagnosing, so I tried Lubuntu.
Lubuntu works perfectly, though the looks now suffer due to the minimalist LXDE desktop environment (yeah, ATM machine, I know) and less visual customization in regard to window/desktop effects than a user such as my girlfriend might expect in 2012.

For me, Ubuntu is not currently an option for a laptop (or even as a desktop OS for an average user if a single component doesn’t work on initial install), and furthermore, the steps required to diagnose these issues and even get to this point are quite ridiculous (for an average user). I understand that if I was looking for a better “out of the box” experience regarding codecs and “non-free” packages I should have tried Linux Mint, though with a quick search of their forums it looks as though their Ubuntu->Debian variant has many of the same (laptop-related) issues – just fewer issues specific to the Unity desktop or GNOME 3, thanks to their use of MATE and Cinnamon (maybe next year).

I can only hope more outside (novice) user-experience testing becomes part of the major testing and development strategies within these open-source communities and organizations (if not now, soon), as it seems they still count on power users and Linux enthusiasts being their only users in 2012.

This was the year I was going to switch my remaining “PC” family members from Windows XP over to a Linux variant, to finally rid myself of cleaning up malware at every family gathering. But after this year’s (unscientific and opinionated) Linux desktop testing – quirks with power management, laptop-specific quirks, quirks with audio, and the ever-present error-reporting quirks – this switch will have to wait for 2013. And while I might get comments like “Why not use Nagios, monitor their syslog and SNMP events, and remotely log in to help them with VNC, and…and…” – because I want to remove myself from the equation as much as possible and empower the users (my family) to figure out their basic tasks on their own as much as they can. I keep hoping for a more “free” method (cost and liberty), so I’ll keep trying once a year (or more as time permits).

So here’s to 2013 – the year of the Linux desktop (or the year I break down and buy the remaining PC family members iPads).