Tuesday, October 30, 2018

Accessing Teradata from SQL Relay


Teradata, the company, has been around, in one form or another, since 1979. Their EDW-focused database has been around since the mid-80's. That's years before "data-warehouse" was even a term. I always love when something has been around longer than the word for what the thing is.

In the late 90's I remember reading that Wal-Mart was data-mining more than a terabyte database of customer info. At the time, I couldn't even get my mind around how much data that was. I couldn't even imagine the storage array. Turns out, that was a Teradata database, and it had apparently been running for years (since 1992) when I finally heard about it. In the early 90's, I'm sure the term "terabyte" technically existed, but I doubt many people had actually spoken the word.

Pretty amazing!

I recently needed to get SQL Relay talking to a Teradata database. A quick search revealed an Express version and ODBC drivers for every conceivable platform, so I figured it would be straightforward. It turned out to be reasonably straightforward, but I did hit a few snags worth mentioning to anyone else pursuing the same goal.

Details follow...

Teradata Express

Teradata Express is available as a VMware image. You have to create an account to download it, but once you do, you're presented with a 7z file that you can extract on linux as follows:

7za x TDExpress16.20.12.01_Sles11_20180620112938.7z

It expands into a directory named TDE16.20.12.01 which contains a vmx file and a couple of vmdk files. Running it is as simple as aiming VMware Player or Workstation at the vmx file and clicking Start. It requires 4G of memory to run though, so make sure to shut down or suspend whatever else you might be running to free that up before you actually click Start, or your system will be paging for the next 10 minutes.

When you start it up, VMware will ask you if you moved or copied the VM. In this case, it's safe to click either "I Moved It" or "I Copied It". "I Copied It" will just assign the NIC a new virtual MAC address.

The OS is apparently SuSE Enterprise 11, and VMware Tools are already installed. It boots up to a super primitive-looking X login, but after you log in, the desktop appears to be Gnome 2 with a few SuSE customizations.

The root password is "root". Poking around, there appear to be other logins too, but I haven't tried any of them.

The database starts by default. If you manage to stop it, there's an icon on the desktop to restart it.

The database also comes configured with a user named "dbc" with password "dbc".

There's an icon for the Teradata Studio Express on the desktop. It's a full-featured graphical database shell, like Toad, or RazorSQL, or other similar tools.

It appears to be based on Eclipse too, which I thought was neat. I'd long heard that Eclipse isn't just an IDE, but rather a framework for building generic IDEs, and Eclipse-proper is more like a flagship-example of the technology. I'd never actually seen (or noticed) another Eclipse-framework-based tool though, until this one.

You can create a new database connection to the local server using:

  • Connection Profile Type: Teradata
  • Database Server Name: localhost
  • User Name[Domain]: dbc
  • Password: dbc

After connecting, you can run queries and play around, in general.

If you want to create another user, you can log in as dbc and run a command like:

create user testuser as password=testpassword perm=524288000 spool=524288000;

The perm and spool arguments are required. The perm parameter specifies the maximum amount of permanent storage allocated to the user, and the spool parameter specifies the amount of "spool space" (working storage for intermediate query results) allocated to the user. I don't remember where I even got those sizes from, probably from some example online, but I haven't run into any problems with them yet, in my Express instance.
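Incidentally, those magic numbers are just 500 MB expressed in bytes, which is easy to verify from a shell:

```shell
# perm/spool of 524288000 bytes is just 500 MB:
echo $((500 * 1024 * 1024))    # prints 524288000
```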

One quirk though... After running the command, the UI replaces the password with *'s, and then displays a red X to the right of the query. Hovering over the red X pops up an error, which may lead you to think that the query failed. It didn't. The error is just complaining that the *'s are invalid SQL. It kind-of makes sense, but it's confusing.

To drop a user, log in as dbc and run a command like:

drop user testuser;

Various Teradata command line tools are also installed, like bteq (the command line db shell), fastload, and tdload. If you want to write some programs, ODBC and JDBC drivers are also installed, and the VM comes with gcc 4.3 and Oracle Java 8. ODBC doesn't appear to be configured though.

I'm sure that the database has various limits imposed on it. I don't know what they are offhand, or whether I've even run up against them or not. If you're just interested in getting familiar with the technology though, it's a pretty good environment.

The virtual NIC is configured to grab an address from DHCP by default, but you'll probably want to give it a static IP if you plan on hitting the database from another machine. Just navigate to Computer -> Control Center -> Network Settings to access the network configuration tool. The only quirky bit is that the VM is apparently configured without a hostname, and the tool is rather adamant that you give it one.

In case you want to access the VM remotely, the ssh server is running, and root logins aren't disabled.

Installing Teradata ODBC

I wanted to access the database from SQL Relay on a remote machine - another VM, running Fedora 26. The most straightforward way to do this seemed to be to install and configure the Teradata ODBC Driver for Linux and then configure SQL Relay to use that.

The driver seems semi-straightforward to install. BUT! It tends to inadvertently sabotage your existing ODBC infrastructure, if you have one. So, it actually requires a bit of special handling.

The problem is that it contains its own copies of:

  • /lib/libodbc.so
  • /lib/libodbcinst.so
  • /lib64/libodbc.so
  • /lib64/libodbcinst.so

I guess this is so that it doesn't have to depend on an existing unixODBC installation. I don't know. All I know is that if you install the software, it will overwrite those files, and the new files tend to break previously-working ODBC configurations.

So, if you have an existing unixODBC installation, then to safely install the Teradata driver alongside of it, you have to move those files out of the way, do the installation, then move them back. The Teradata driver appears to work fine with the libraries provided by unixODBC, at least on Fedora 26.

If you have an existing unixODBC installation, here's what you have to do to install the Teradata Driver:

If you have 32-bit unixODBC installed:

cd /lib
sudo mv libodbc.so libodbc.so.save
sudo mv libodbcinst.so libodbcinst.so.save

If you have 64-bit unixODBC installed:

cd /lib64
sudo mv libodbc.so libodbc.so.save
sudo mv libodbcinst.so libodbcinst.so.save

If you're running some version of Linux other than Fedora, then you'll have to find and move the appropriate libraries for your platform.
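The whole save/install/restore dance can be sketched as a pair of shell functions. (The function names are mine, and the demonstration below runs against a scratch directory standing in for /lib or /lib64, so it can be tried without sudo; against the real directories, each mv needs root.)

```shell
# Move the unixODBC driver-manager links aside so an installer
# can't clobber them.
save_libs() {
    for lib in libodbc.so libodbcinst.so; do
        [ -e "$1/$lib" ] && mv "$1/$lib" "$1/$lib.save"
    done
}

# Move any newly-installed copies out of the way, then put the
# originals back.
restore_libs() {
    for lib in libodbc.so libodbcinst.so; do
        [ -e "$1/$lib" ] && mv "$1/$lib" "$1/$lib.teradata"
        [ -e "$1/$lib.save" ] && mv "$1/$lib.save" "$1/$lib"
    done
}

# Demonstration in a scratch directory standing in for /lib64.
demo=$(mktemp -d)
touch "$demo/libodbc.so" "$demo/libodbcinst.so"

save_libs "$demo"                                  # before running setup_wrapper.sh
touch "$demo/libodbc.so" "$demo/libodbcinst.so"    # the installer drops its own copies
restore_libs "$demo"                               # after the install

ls "$demo"
```

After restore_libs, the unixODBC links are back in place and the Teradata copies are preserved with a .teradata suffix, in case you ever want them.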

To actually install the driver:

tar xfz tdodbc1620__linux_indep.
cd tdodbc1620
sudo ./setup_wrapper.sh

Hit return to allow it to install in /opt

Afterwards, it will run unattended and install tdodbc1620-. Oddly, this rpm is marked "noarch", but it actually installs binaries for both x86 and x64 platforms.

Post-install, you have to move the newly installed libodbc.so and libodbcinst.so links out of the way, as follows:

cd /lib
sudo mv libodbc.so libodbc.so.teradata
sudo mv libodbcinst.so libodbcinst.so.teradata
cd /lib64
sudo mv libodbc.so libodbc.so.teradata
sudo mv libodbcinst.so libodbcinst.so.teradata

Then, if you have 32-bit unixODBC installed:

cd /lib
sudo mv libodbc.so.save libodbc.so
sudo mv libodbcinst.so.save libodbcinst.so

Or, if you have 64-bit unixODBC installed:

cd /lib64
sudo mv libodbc.so.save libodbc.so
sudo mv libodbcinst.so.save libodbcinst.so

And that is it. The ODBC driver for Teradata is now installed.

Configuring Teradata ODBC

Configuring the Teradata ODBC Driver is a lot simpler than installing it.

You don't have to add anything to /etc/odbcinst.ini, just append a DSN like the following to /etc/odbc.ini:

[teradata]
# This key is not necessary and is only to give a description of the data source.
Description=Teradata Database ODBC Driver 16.20

# Driver: The location where the ODBC driver is installed to.
Driver=/opt/teradata/client/16.20/odbc_64/lib/tdataodbc_sb64.so

# Required: These values can also be specified in the connection string.
DBCName=<hostname or IP of the Teradata VM>
UID=testuser
PWD=testpassword

# Optional
SessionMode=System Default
UPTMode=Not set

The most important parameters are:

  • DBCName - the hostname or IP of the database server (the address of the Teradata VM in my case, but probably different in your environment)
  • UID - the username to log in to the database with (testuser in my case, but could also be dbc or another user)
  • PWD - the password corresponding to the UID

The rest of the parameters do various things which can be researched online, but aren't critical to change for general operation.

Once configured, you can connect to the database using isql, provided by unixODBC.

$ isql teradata
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> select 1
+-----+
| 1   |
+-----+
| 1   |
+-----+
SQLRowCount returns 1
1 rows fetched

If that works, then the Teradata ODBC Driver has been configured successfully.

Configuring SQL Relay

The ultimate goal here is to access Teradata through SQL Relay. After getting everything else working, that last bit is pretty simple. Assuming SQL Relay is already installed, all you have to do is update the configuration file (either sqlrelay.conf or a file in sqlrelay.conf.d) with a teradata instance, as follows:

<?xml version="1.0"?>
<instances>
    <instance id="teradataexample" dbase="odbc">
        <users>
            <user user="exampleuser" password="examplepassword"/>
        </users>
        <connections>
            <connection string="dsn=teradata;user=testuser;password=testpassword;autocommit=yes"/>
        </connections>
    </instance>
</instances>

In the connection tag, the dsn option must match the DSN defined in /etc/odbc.ini, and the user/password options must match the UID/PWD defined in that DSN.

The user/password defined in the user tag are the user/password that you'll use to log into SQL Relay itself.

To start it up:

sqlr-start -id teradataexample

To access the database:

$ sqlrsh -host localhost -user exampleuser -password examplepassword
sqlrsh - Version 1.4.0
 Connected to: localhost:9000 as exampleuser

 type help; for help.

0> create table test (col1 int, col2 varchar(200));
 Rows Returned   : 0
 Fields Returned : 0
 Elapsed Time    : 0.063005 sec

0> insert into test values (1,'hello');
 Rows Returned   : 0
 Fields Returned : 0
 Elapsed Time    : 0.029303 sec

0> select * from test;
col1 col2 
1    hello

 Rows Returned   : 1
 Fields Returned : 2
 Elapsed Time    : 0.030262 sec

0> drop table test;
 Rows Returned   : 0
 Fields Returned : 0
 Elapsed Time    : 0.077361 sec

0> quit;

To shut it down:

sqlr-stop -id teradataexample

If all of that worked, then you can now access Teradata from SQL Relay.


SQL Relay -> ODBC -> Teradata is generally usable, but ODBC drivers are quirky, as a rule, so there are probably things that don't work correctly. I've already discovered a few esoteric ones and I'll be updating the ODBC connection module with Teradata-specific workarounds as appropriate.

If you run into anything weird, please report it to support@firstworks.com.


Tuesday, October 23, 2018

Wrangling mydumper and Auto-Increment Columns

One of my client's apps has a large MySQL database that I regularly have to dump and reload into a local database for dev and testing. To expedite this process, I recently started using the excellent mydumper/myloader in place of the venerable mysqldump. It does shave considerable time off of the reload process, but I got strange results the first few times I tried to use it.

Basically, there were a few tables that look like this in production:

col1  |  col2
  0   |  some value
  1   |  some other value
  2   |  some third value
 ...  |  ...and so on...

When I'd use mysqldump to dump them, and source the dump to load my local db, it would reliably dump/load the same data, in the same order.

When I'd use mydumper/myloader, I'd get something like:

col1  |  col2
  1   |  some other value
  2   |  some third value
 ...  |  ...and so on...
  20  |  some value

The first row (with col1 = 0) would end up at the end of the table, and col1 would have the wrong value. The app really cares about those values too, so I had to figure out what was happening, and how to fix it.

This took a while, but the problem ultimately came down to:

  • col1 is an auto-increment column
  • mydumper and mysqldump both output instructions to:
    • create the table
    • reset the next-auto-increment value to whatever it was in the database that is being dumped
    • insert rows, using the exact values from the database that is being dumped
  • by default, if you insert a 0 into an auto-increment column, mysql substitutes the next auto-increment value for the 0
  • mysqldump outputs one big .sql file to create tables and insert rows, and it includes instructions to disable this behavior
  • mydumper outputs lots of individual files (for each table - a script to create the table, and a binary full of data to load into it) but none of them contain instructions to disable this behavior

So, basically: if you source a script created by mysqldump, a 0 in col1 is loaded as a 0, but if you use myloader to load the output of mydumper, a 0 in col1 is replaced with whatever the next auto-increment value is. All other values of col1 load as expected either way.

Working around the problem was tricky. There's no obvious way to add a SET SESSION sql_mode='...,NO_AUTO_VALUE_ON_ZERO' to the mydumper binary data file. I thought about adding it to each of the table-create scripts, but I wasn't sure if the data-load would necessarily be run in the same session as the create, so the SET SESSION might not even be in effect when the data was being loaded.

I could have run a SET GLOBAL sql_mode='...,NO_AUTO_VALUE_ON_ZERO' prior to the load, but I reboot the system all the time and I'd certainly forget to re-run it the next time, and even if I remembered, I'd forget exactly what it was that I needed to run. The only sure-fire solution was to alter the sql_mode of the server itself.

This involved adding a line to /etc/mysql/my.cnf, in the [mysqld] section, like this:

sql_mode = "...,NO_AUTO_VALUE_ON_ZERO"

...and bouncing the server. Actually, there's probably some way to get the server to reload the options, but I could easily bounce mine, so I did.

The NO_AUTO_VALUE_ON_ZERO part was the operative part, but I had to add the other options because they were apparently already in effect, according to:

select @@sql_mode
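Building the new value is just a matter of appending NO_AUTO_VALUE_ON_ZERO to whatever the server reports. A quick shell sketch (the mode list here is an example; substitute the actual output of select @@sql_mode on your server):

```shell
# Example value, as reported by something like: mysql -N -e "select @@sql_mode"
current="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"

# Append NO_AUTO_VALUE_ON_ZERO and emit the my.cnf line.
echo "sql_mode = \"${current},NO_AUTO_VALUE_ON_ZERO\""
```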

The sketchy part of this solution is that I'm not 100% sure that somewhere deep in that app (or in another app that uses the same DB), there isn't some query that relies on NO_AUTO_VALUE_ON_ZERO being disabled. So, arguably, this isn't the best solution. I don't know of a better one though, so for now I'm going with this.

Ideally, it would be great if mydumper handled this, per-session, itself. I think I'll submit a feature request...

(Update: looks like there's already an open issue for this: https://github.com/maxbube/mydumper/issues/142)

PPTP on CentOS 7

I recently needed to get a PPTP VPN connection working from a CentOS 7 machine to one of my clients. I'd gotten the same working from Linux Mint a few weeks earlier, and it was as simple as clicking the network icon on the toolbar, selecting VPN Connections -> Configure VPN, and adding a PPTP connection. I figured it would be equally simple on CentOS 7, but I was surprised to find that the list of VPN types didn't include PPTP.

This was confusing because the NetworkManager-pptp and NetworkManager-pptp-gnome packages were both installed. Long story short... As of 10/23/18, CentOS 7 uses NetworkManager 1.10, and compatible versions of the PPTP-related RPMs aren't available in the CentOS repos. If you want to get it working, you have to build and install new-enough RPMs from source.

Steps to accomplish this follow...

Make sure NetworkManager-pptp and NetworkManager-pptp-gnome are installed, just to be sure that you have various dependencies installed as well:

sudo yum install NetworkManager-pptp NetworkManager-pptp-gnome

Set up an RPM-build environment (if you don't already have one set up). This is most easily done by installing the fedora-packager meta-package and running a handy script that it provides:

sudo yum install fedora-packager @development-tools
rpmdev-setuptree

Note that rpmdev-setuptree wasn't run using sudo. This script should be run as you. It basically creates a .rpmmacros file and an rpmbuild tree under your home directory.

Download a new enough source RPM for NetworkManager-pptp from somewhere. A new enough version is included in Fedora 28, for example:

wget https://dl.fedoraproject.org/pub/fedora/linux/updates/28/Everything/SRPMS/Packages/n/NetworkManager-pptp-1.2.8-1.fc28.src.rpm

Install packages that you'll need to build the RPMs:

sudo yum install glib2-devel gtk3-devel NetworkManager-libnm-devel libnma-devel ppp-devel libtool intltool libsecret-devel NetworkManager-devel NetworkManager-glib-devel libnm-gtk-devel

Build the RPMs:

rpmbuild --rebuild NetworkManager-pptp-1.2.8-1.fc28.src.rpm

And, finally, install the RPMs that you just built:

cd ~/rpmbuild/RPMS/x86_64
sudo yum install NetworkManager-pptp-1.2.8-1.el7.x86_64.rpm NetworkManager-pptp-gnome-1.2.8-1.el7.x86_64.rpm

(Of course, this assumes that you're building on an x86_64 host. Adjust the path and file names accordingly if you are not.)

At this point, if you try to create a new VPN connection, you should see "Point-to-Point Tunneling Protocol (PPTP)" in the list of available VPN types.

You can also safely remove NetworkManager-pptp-1.2.8-1.fc28.src.rpm at this point. If you don't think you'll be building any more RPMs any time soon, then you can also remove ~/.rpmmacros and the ~/rpmbuild tree. If you find that you do need to build more RPMs, then you can just run rpmdev-setuptree again to re-create the RPM-build environment.

Good luck!

Tuesday, September 11, 2018

SQL Relay 1.4.0 Release Announcement

Version 1.4.0 of SQL Relay, the powerful connection pool, load balancer, query router, and database proxy, is now available.

Like the last release, this release is mainly focused on integrating recent improvements and customizations, contributed by or developed for various clients into the official distribution.

Notable Changes and Improvements

dbase="mariadb" is now supported. You don't have to use dbase="mysql" when running SQL Relay against mariadb any more.

sqlr-import can import CSV files now.

sqlr-export correctly escapes double-quotes now.

Query translations are applied to the select-database query now.

The MySQL/MariaDB connect string supports an api=stmt/classic option now that selects which MySQL API to use. When api=classic, the mysql_*() functions are used and binds are faked. When api=stmt, the mysql_stmt_*() functions are used and binds are supported natively. It turns out that the classic API is faster, even with modern MySQL, so if you don't specifically need the functionality of the stmt API, then api=classic might work better for you. When using the MySQL Front-End Modules, you can even point an app that uses the stmt API at SQL Relay which is configured to use api=classic, and vice-versa.
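For example, a MySQL instance's connection tag might select the classic API like this (the host/user/password/db values are placeholders; everything except api=classic is just an ordinary MySQL connect string):

```xml
<connection string="host=mysqlhost;port=3306;user=exampleuser;password=examplepassword;db=exampledb;api=classic"/>
```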

The ODBC connect string supports a unicode=yes/no option that makes it possible to disable unicode with drivers that don't support it, such as the Cloudera Impala ODBC driver.

There are a lot of other changes too, but everything else is subtle or behind-the-scenes.

Full ChangeLog follows:

  • odbc connection module - trace=yes/no/default now instead of just yes/no
  • pushed detachbeforelogin parameter up to sqlrserverconnection class
  • updated to use rudiments container clearAndDelete...() methods
  • fixed nonsensical uint32_t majorversion>=0 comparison in freetds conn
  • fixed nonsensical uint16_t isnull==-1 comparison in odbc conn
  • fixed return NULL in method that returns false in router conn
  • added manual removal of build directory in nodejs clean target
  • updated nodejs clean target with - in front of node-gyp clean
  • fleshed out the installing-sqlrelay-from-packages doc
  • connect string "timeout" -> "connecttimeout" in docs
  • dbase="mariadb" works now
  • docs have been updated to acknowledge mariadb
  • added configure test to disable ODBC on platforms without SQLULEN
  • updated mysql_config test to add -L/usr/local/lib if it fails, to deal with libiconv, which is located there, on newer freebsd
  • updated mysql drop-in test not to test def_length if def is NULL
  • added NODEGYPPYTHON flag that can be set manually on netbsd
  • fixed gcj/gcj3/kaffe detection on older systems
  • fixed some buggy static module declarations/assignments
  • sqlr-import can import csv files now
  • sqlr-export correctly escapes double-quotes now
  • query translations apply to the select-database query now
  • added api=stmt/classic option to mysql connection module
  • added a per-cursor bindpool
  • added unicode=yes/no option to odbc connection module
  • fixed a bug that could cause stale column metadata to be returned to a protocol module between prepare and execute when binds are being faked
  • added stalecursors logger
  • added send()/recv() passthrough operations
  • made various bind-related buffers per-cursor
  • added pattern option to query notification event

Rudiments 1.1.0 Release Announcement

Version 1.1.0 of Rudiments, the C++ class library for developing systems and applications, is now available.

This release features some fairly significant restructuring of the SAX and DOM related classes. In particular, xmlsax has been genericized into sax, xmldom has been genericized into dom, and xmldomnode has been genericized into domnode. The xmlsax/xmldom classes have been reworked to inherit from sax/dom. The jsonsax/jsondom classes have been added to manipulate JSON objects. The csvsax/csvdom classes have been added to manipulate CSV files. And, finally, the xmldomevents class has been renamed to just domevents, as it can be used with any class that inherits from dom.

The bytebuffer class has also been overhauled and is now implemented similarly to std::basic_string, with similar performance characteristics.

The url class supports HTTP POST now.

Otherwise, there are a ton of internal fixes and improvements.

ChangeLog follows:

  • fixed some dictionary and xmlsax memory leaks
  • fixed lib/lib64 detection in configure script
  • fixed Werror management in configure script
  • fixed race condition in threadmutex test
  • --enable-built-in-regex works on uw7 now
  • added regularexpression match-with-length methods
  • fixed file test on OSR and UnixWare
  • disabled chown in file test on Windows (not reliable over cifs)
  • fixed file descriptor passing on OSR
  • updated xmldomnode::print() to use the output interface
  • added various clearAndDelete...() methods to container classes
  • reenabled chat class by default
  • updated process::backtrace() to take an output, consolidated methods
  • added explicit virtual destructors to input/output classes
  • removed unused containerutilinlines.h header
  • improved dlopen/-ldl configure test
  • abstracted sax and dom classes
  • added csvsax and csvdom classes
  • renamed xmldomnode and xmldomevents to domnode and domevents
  • refactored write/print/xml methods of dom/domnode classes
  • refactored some classes to have trivial constructors
  • renamed regularexpression::compile() to setPattern(), added getPattern()
  • renamed memorypool::deallocate() to memorypool::clear()
  • renamed memorypool::allocateAndClear() to memorypool::allocateAndZero()
  • added clear-with-parameter-reset methods and parameter-getters to dynamicarray, bytebuffer, stringbuffer
  • added cursordomnode class
  • added jsonsax and jsondom classes
  • added some http-post options to url class
  • improved performance of charstring::integerLength(), charstring::parseInteger(), and stringbuffer::append(integer)
  • refactored bytebuffer/stringbuffer to improve performance
  • getStringLength() -> getSize() in read() with terminator
  • added safe-to-include-byteswap.h-after-netinet/in.h configure test
  • updated tls code to actually use RUDIMENTS_SSL_VOID_PTR macro
  • moved various madvise/mprotect/mlock methods to sys class
  • added tests for systems that have, but don't define tzset, ftruncate, and fsync
  • various windows spawn() fixes

Monday, May 21, 2018

SQL Relay 1.3.0 Release Announcement

Version 1.3.0 of SQL Relay, the powerful connection pool, load balancer, query router, and database proxy, is now available.

This release is mainly focused on integrating recent improvements and customizations, contributed by or developed for various clients into the official distribution.

ODBC Improvements

As with the last release, the ODBC front and backend have both been improved significantly. Apps like Toad Data Point and RazorSQL should work as expected with SQL Relay now.

PDO Improvements

The PDO driver has been wrung out and updated to be more consistent with other PDO drivers. So, now, if you're migrating a PHP PDO app from connecting directly to the database to using SQL Relay, you should encounter fewer, if any, differences in behavior.

"patterns" Query Translation Module

There is now a new "patterns" query translation module. It allows you to match a pattern in a query and replace it with another pattern. Match-patterns can be strings, case-insensitive strings, or regular expressions. It also allows you to nest matches. See the "patterns" documentation for a detailed description.

Column Tables

Many, but not all, databases can return the table name of a column in the result set header. The client-side supports this now, as do the connection modules for various databases.

In classic PostgreSQL fashion, PostgreSQL databases return a table OID by default, and require a "tablemangling" parameter in the connect string to indicate whether to return the OID's, or to map them to actual table names. This is similar to the "typemangling" parameter.

SQL Server only returns table names when a "server cursor" is in use. FreeTDS doesn't support returning the table name at all, so it's just not an option when FreeTDS is being used. The ODBC driver for SQL Server can return the table name, but again, only when a "server cursor" is in use. Unfortunately, server cursors cause results to come back incredibly slowly. So, the ODBC connection module has a getcolumntables="yes/no" parameter. When set to "yes" it basically prepares the query with server cursors enabled, gets the column info, then prepares it again with server cursors disabled, so that when it's executed, results can be fetched quickly. There is a cost to the double-prepare, but it's nothing like the cost of fetching rows. Since there's a cost at all though, it can be disabled.

Modern Platforms

Current versions of Ubuntu and Fedora Linux revealed various subtle compiler problems. These have been remedied and the code should compile cleanly on these systems.

There are a lot of other changes too, but everything else is subtle or behind-the-scenes.

Full ChangeLog follows:

  • updated systemd init file to use SIGINT to kill sqlrelay
  • fixed several "ocurred" -> "occurred" typos
  • client debug hides password now
  • passing -disable-crash-handler to the scaler passes it to connections spawned by it now
  • passing -nodetach to the sqlr-scaler or sqlr-listener works now
  • added get*Timeout methods to the C++ API
  • various pdo tweaks to make it more consistent with other pdo drivers
  • added rudiments version and compile date/time to output of --version
  • exposed timeouts, client info via custom pdo options
  • timeouts configurable in pdo connect string now
  • dsn parameters can be overridden (or supplied outright) in the connect string now
  • added \r and ; as possible after-variable characters for fake-binds
  • added -short, -connection-detail, and -query options to sqlr-status
  • added initial support for queries that return multiple result sets
  • lots of various odbc connection module improvements
  • added parse failed event
  • updated normalize translation to remove spaces around :=
  • added noop query framework
  • added "patterns" query translation
  • added debug="sql"
  • fixed subtle bug in normalize translation that could cause problems with empty strings
  • moved the memorypool used by the sqlrclient protocol module to allocate memory for bind variables into the sqlrservercontroller and exposed it to modules
  • added docs for tls/ssl backend enc/auth
  • added directives framework and custom_wf directive
  • pushed connect timeout parameter up
  • implemented underpinnings for input/output binds
  • made SQLGetConnectAttr(SQL_AUTOCOMMIT) not return an error
  • fixed configure bug causing --disable-server to disable the odbc api
  • configure.vbs sets versions in vdproj files now
  • fixed some missing code in the documentation
  • replaced toFloat() calls with toFloatC()
  • mysql connection module handles /* ... */ comments correctly when detecting which API (stmt vs. traditional) to use now
  • added php conf type for netbsd
  • added freetds parameter to freetds connection module that sets the FREETDS/FREETDSCONF environment variables
  • fixed some subtle SQLGetData bugs in ODBC driver
  • updated sqlrserver API to support getting the column tables
  • updated odbc, mysql, postgresql, and sqlite connection modules to return column table names
  • updated sqlrclient protocol to send/receive column table names
  • added result set header translation framework
  • added getcolumntables option to odbc connection module
  • ODBC SQLTables switches to the specified catalog before getting the table list now (and switches back)
  • fixed ODBC wchar truncation problem
  • fixed "show columns odbc" bug in mysql connection module
  • mysql connection module uses MYSQL_FIELD.name_length if it can now
  • fixed char * -> const char * bug in python API
  • added support for openjdk-11 to configure script
  • removed some unnecessary parentheses in mysql drop-in library that gcc 8.1.1 doesn't like

Rudiments 1.0.7 Release Announcement

Version 1.0.7 of Rudiments, the C++ class library for developing systems and applications, is now available.

This is mainly a bug-fix release.

The most significant new feature is migration of the templateengine class from an older project into Rudiments. Also, there are 2 new "interface" classes: input and output. The filedescriptor class now implements both. The bytebuffer class now implements output.

VS2017 is also formally supported now as well.

ChangeLog follows:

  • updated threadmutex, filedescriptor, file, signalhandler, signalset, and dynamiclib destructors to be less disastrous in the event of a double-free
  • added sys::getRudimentsVersion()
  • added empty-string detection to charstring::isInteger()/isNumber()
  • added charstring::toFloatC()
  • wrapped _access_s call with null/empty-string detection
  • added to codetree
  • added various replaceFirst/replaceAll methods to charstring
  • fixed some subtle codetree bugs that could cause parsing of a concatenation to succeed at eof inappropriately
  • updated xmldomevents to support multiple events per xmldomnode
  • moved codetree nonterminal-build to codetreegrammar
  • added nonterminal definition dictionary to codetree
  • added templateengine from stencil
  • added input/output interfaces
  • made filedescriptor an input/output
  • made bytebuffer an output
  • configure.vbs sets versions in vdproj files now
  • added config_vs2017.h and v2015 detection to configure.vbs
  • added keepalive to http urls
  • added # to characters that must be http-escaped
  • fixed debugprint macros to only print if the file could be opened

Wednesday, April 25, 2018

Microsoft ODBC Driver 17 for SQL Server on Linux

The Microsoft ODBC Driver 17 for SQL Server has been available for Linux for a while now, but I recently had a bit of trouble getting it to work with SQL Relay on CentOS 6.9. Just in case anyone else has had similar trouble, here's what I ran into, and here's how I fixed it.

The CentOS/RHEL 6 RPMs are available at https://packages.microsoft.com/rhel/6.8/prod/.

The driver itself is msodbcsql17-, but if you try to install it using yum localinstall or rpm -i, then it will complain that it needs a newer version of unixODBC than what is available for CentOS/RHEL 6. Version 2.3.1, in particular.

Fortunately, there are some unixODBC-2.3.1 rpms available in the same directory. Sort of...

Actually, there's a unixODBC-devel-2.3.1 rpm and some odd unixODBC-utf16 rpms, but the requested unixODBC-2.3.1 rpm is missing.

It turns out that a unixODBC-2.3.1 package was there at one point, but has since been removed. It's kind-of good that it was removed because it didn't always work. It worked with isql, but any calls to SQLConnect() by SQL Relay (and presumably by other apps) would reliably hang. What can we do though? The msodbcsql17 rpm requires unixODBC-2.3.1.

Well, those utf16 packages actually do work with msodbcsql17, and don't cause SQLConnect() to hang, but getting everything to install requires a little finesse.

I was able to get everything to install by running:

yum remove unixODBC unixODBC-devel
yum localinstall unixODBC-utf16-*
rpm -i --nodeps msodbcsql17-

The --nodeps option is important because it enables the rpm to install without checking dependencies.

After that I reconfigured, rebuilt, and reinstalled SQL Relay and everything worked for me.

Wednesday, April 11, 2018

Updating Ubuntu 17.04

I recently attempted to apt-get upgrade an Ubuntu 17.04 VM that I hadn't upgraded in a while, and got all kinds of errors like:

N: Updating from such a repository can't be done securely, and is therefore disabled by default.

It took me a surprising amount of time to discover the solution. Apparently this typically happens when a mirror has some problem that makes the updates unreadable, and I ran into lots of forums explaining how to fix that. Unfortunately, that wasn't my problem. My problem was a lot more legitimate.

Ubuntu 17.04 reached EOL in mid January of 2018, and so us.archive.ubuntu.com and security.ubuntu.com don't host updates any longer.


Well, the solution is simple. Edit /etc/apt/sources.list, replace us.archive.ubuntu.com and security.ubuntu.com with old-releases.ubuntu.com, and then rerun apt-get update and apt-get upgrade.
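That substitution is a sed one-liner. A sketch (operating on an example file here; in practice you'd run the sed command with sudo against /etc/apt/sources.list itself):

```shell
# A couple of example 17.04 (zesty) sources.list lines.
cat > sources.list.example <<'EOF'
deb http://us.archive.ubuntu.com/ubuntu/ zesty main restricted universe
deb http://security.ubuntu.com/ubuntu zesty-security main restricted universe
EOF

# Point both dead mirrors at old-releases.ubuntu.com.
sed -i -e 's/us\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' \
       -e 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' sources.list.example

cat sources.list.example
```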

I guess I never ran into the problem before because, other than 17.04, I'm only running LTS releases. 16.04 is still supported, and the other versions are so old that I had to make that change immediately after installation just so apt-get update would work at all. I'd never run into a case where it had been working and then stopped.

You learn something new every day.

Monday, January 29, 2018

A Couple of SQL Relay on Ubuntu How-To's

A couple of people have recently asked for some step-by-step tutorials detailing SQL Relay installation and configuration on various platforms. So, there's a new Gadget on the right hand side of this blog called SQL Relay How-To's that will contain these tutorials.

I just added 2 for Ubuntu 16.04.

Check them out!