PostgreSQL unit test tries to access non-existent 'precision' attribute #1003
And for a bonus question: does anyone know why the name for the table schema value is …
I am unable to reproduce your issue.
Really? AFAIK PostgreSQL doesn't speak TDS.
Sorry, that was copy/pasted from another ticket. I have fixed the description above.
All these years using the …
Okay, so apparently it's not strictly a Python 3.10 thing:
Can you run …
The version of unixODBC is 2.3.9.
The documentation says:
Why would the test be written to look for a column which—according to that documentation—shouldn't even be there?
Good question. Maybe different drivers deviate somewhat from the ODBC spec. It's certainly one of the columns returned by the PostgreSQL driver:
re: "Maybe different drivers deviate somewhat", above - apparently they do:

pgtests.py:

```python
row = results['xΏz']
assert row.type_name == 'varchar'
assert row.precision == 4, row.precision
```

sqlservertests.py:

```python
row = results['xΏz']
assert row.type_name == 'varchar'
assert row.column_size == 4, row.column_size
```
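The two attribute names are in fact the ODBC 2.x name (`precision`) and the ODBC 3.x name (`column_size`) for the same `SQLColumns` result column, so a driver-tolerant test could read whichever one the row exposes. A minimal sketch — the helper and the namedtuple stand-ins are hypothetical illustrations, not pyodbc API:

```python
from collections import namedtuple

def get_column_size(row):
    """Return the column size regardless of whether the driver used the
    ODBC 3.x name ('column_size') or the ODBC 2.x name ('precision')."""
    for attr in ("column_size", "precision"):
        if hasattr(row, attr):
            return getattr(row, attr)
    raise AttributeError("row has neither 'column_size' nor 'precision'")

# Stand-ins for pyodbc Row objects produced by two different drivers:
Odbc3Row = namedtuple("Odbc3Row", ["type_name", "column_size"])
Odbc2Row = namedtuple("Odbc2Row", ["type_name", "precision"])

print(get_column_size(Odbc3Row("varchar", 4)))  # 4
print(get_column_size(Odbc2Row("varchar", 4)))  # 4
```

Either naming convention then satisfies the same assertion, which is roughly the shape of fix the thread converges on.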
Does this mean that the …
It seems so. It's probably based on what SQL Server ODBC does. For reference, here are the columns that PostgreSQL ODBC returns (for me, at least): …
I would have expected that. I found what seems like a relevant commit for the psqlodbc driver, in which they say they are switching to "ODBC 3 column names for the result set of catalog functions." This commit was made in late August, so if the driver you're running is earlier than that, we may have found the explanation for why you see different results for the tests than I see.
You're probably right. You are using "psqlodbcw.so 13.02.0000" while I'm using "psqlodbcw.so 12.01.0000".
Sure enough, the release you're running (12.01.0000) was tagged almost exactly two years ago. The release I'm running (13.02.0000) was released in September (a month after the commit I linked to above).
It's not 100% clear that I'm reading the pipeline test logs correctly, but if I am, the pipeline is running with release 11.01.0000 of psqlodbc, tagged May 2019.
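Since the behavior hinges on the driver release, one portable approach is to gate the assertion on a parsed driver version. A sketch under the assumption that the version string is fetched with pyodbc's `Connection.getinfo(pyodbc.SQL_DRIVER_VER)`; the parsing helper itself is hypothetical:

```python
def parse_driver_version(ver):
    """Turn an ODBC driver version string like '12.01.0000' into a
    tuple of ints so releases can be compared numerically."""
    return tuple(int(part) for part in ver.split("."))

# With a live connection the string could be obtained along the lines of:
#   ver = cnxn.getinfo(pyodbc.SQL_DRIVER_VER)
# The psqlodbc switch to ODBC 3 column names shipped in the 13.x series,
# so 12.01.0000 should compare as older:
assert parse_driver_version("12.01.0000") < parse_driver_version("13.02.0000")
print(parse_driver_version("11.01.0000"))  # (11, 1, 0)
```

Tuple comparison avoids the classic pitfall of comparing version strings lexicographically (where "9" sorts after "13").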
The problem there is that ODBC drivers apparently feel free to change names and throw in any number of extra columns that may or may not exist in other implementations. (For example, there are 18 pyodbc-documented columns and psqlodbcw.so 12.01.0000 returns 26 columns.) Trying to document all possible variants would be … tedious. :)
Excellent detective work there, BTW.
TL;DR: The tests should be verifying that the behavior of the software matches the documentation.
Absolutely! That's the very reason that …
If you thought I was suggesting that we document all of the variants, then I didn't do a good enough job of explaining what I was hoping for. When I wrote "... it [ …
If that represents an unacceptably high level of effort (which it may well; this is a volunteer-driven, open-source project, after all), then it would be preferable for the documentation to avoid listing any column names to be found in the …
And (assuming this is the road we go down) the unit tests should be testing exactly that. I would further recommend that all of the methods and properties which are not part of the Python DB-API 2.0 be clearly identified as package-specific extensions. To be clear, none of the projects I work on use ODBC to talk with PostgreSQL. But when I submit a PR, I would like to know that all of the unit tests pass under my modified code, including DBMS-specific tests for stacks I don't necessarily use myself. It's not good enough that the tests happen to pass for the stacks used by the CI pipeline.
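A documentation-driven check along those lines could pin only the 18 column names that pyodbc's documentation lists for `cursor.columns()` (the ODBC 3.x names) and tolerate any driver-specific extras appended after them. A sketch — `check_documented_columns` is a hypothetical helper fed a DB-API `cursor.description`, and `base_typeid` is just an invented example of a driver extra:

```python
# The 18 result columns documented for pyodbc's Cursor.columns(),
# using the ODBC 3.x names:
DOCUMENTED_COLUMNS = (
    "table_cat", "table_schem", "table_name", "column_name",
    "data_type", "type_name", "column_size", "buffer_length",
    "decimal_digits", "num_prec_radix", "nullable", "remarks",
    "column_def", "sql_data_type", "sql_datetime_sub",
    "char_octet_length", "ordinal_position", "is_nullable",
)

def check_documented_columns(description):
    """True if the first 18 entries of a cursor.description carry the
    documented names; extra driver-specific columns beyond them are ignored."""
    actual = tuple(d[0] for d in description[:len(DOCUMENTED_COLUMNS)])
    return actual == DOCUMENTED_COLUMNS

# A driver appending extra columns beyond the documented 18 still passes:
odbc3_desc = [(name,) for name in DOCUMENTED_COLUMNS] + [("base_typeid",)]
print(check_documented_columns(odbc3_desc))   # True

# A driver using the ODBC 2.x name 'precision' for column 7 fails the check:
odbc2 = list(DOCUMENTED_COLUMNS)
odbc2[6] = "precision"
print(check_documented_columns([(n,) for n in odbc2]))  # False
```

This is the "test the documented contract, tolerate extensions" stance the comment argues for, expressed as one assertion per catalog call.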
Thanks! 😅
Confirming that the …
* Add support for Python 3.10, drop EOL 3.5 (mkleehammer#952)
* Remove duplicate entry in pyi stub (mkleehammer#979)
* Replace deprecated SafeConfigParser with ConfigParser (mkleehammer#953)
* Designate connection string as optional (mkleehammer#987)
* Fix spelling typos (mkleehammer#985)
* Fix for DSN Names with non-ASCII chars (mkleehammer#951); fixes mkleehammer#948
* Added InterfaceError to pyodbc.pyi (mkleehammer#1013)
* Upgrade deprecated unicode encoding calls (mkleehammer#792)
* Do not include .pyc artifacts in source tarball (mkleehammer#742)
* Build wheels with cibuildwheel on GitHub Actions — fixes mkleehammer#175, ref mkleehammer#688, closes mkleehammer#668, closes mkleehammer#685, fixes mkleehammer#441 and pretty much most issues that mention `sql.h: No such file or directory`; this also needs some PyPI keys set up for automated uploads
* Install unixodbc-dev for Linux wheels
* Enable GitHub Actions for pull requests
* Use Debian based `manylinux_2_24` image
* `apt-get` update before installing in wheel build
* Use PEP 440 version name required for wheels
* Skip building 32-bit wheels
* Use 4.0.dev0 for the default version, because test_version() wants 3 parts; checked this won't shadow a released minor version (credit goes to @hugovk): `Version("4.0.dev0") > Version("4.0.24")` is `False`
* Had to use Debian image for PyPy too
* Disable PyPy wheels — PyPy is missing some C functions that `pyodbc` needs
* Update README.md
* Avoid error when testing with DSN= connection; fixes mkleehammer#1000
* Disable setencoding/setdecoding in tests3/pgtests.py; fixes mkleehammer#1004
* Adjust test_columns() in tests3/pgtests.py for newer driver versions; fixes mkleehammer#1003
* Move driver version check out of function
* Add comment to _get_column_size()
* Fix memory leak with decimal parameters; fixes mkleehammer#1026
* Create codeql-analysis.yml
* Bugfix/sql param data memory leak (mkleehammer#703): created a test file for the SQLParamData scenario; fixed the leak by Py_XDECREF-ing the PyObject with 1 reference; improved the test to close the cursor, set it to None, and force gc; assorted cleanup (tabs to spaces, moved a variable out of conn's scope)
* Update gitignore, remove duplicated entries
* Replace deprecated PyUnicode_FromUnicode(NULL, size) calls (mkleehammer#998): current versions of Python write a deprecation warning to stderr, which breaks CGI scripts running under web servers that fold stderr into stdout, and likely other software; replaced with PyUnicode_New(size, maxchar) and rewrote the accompanying population code to use the new PyUnicode APIs
* Make pyodbc compatible with PostgreSQL infinity dates, returning MINYEAR and MAXYEAR to Python instead of values outside Python's limits
* Remove autoformat from code
* Add odbc_config support on mac and m1 homebrew dir
* Note EOL of 2.7 support in README (mkleehammer#945)
* Fix version of CI generated wheels: the CI system checks out exact tags like "git checkout 4.0.33", which results in a detached HEAD; the version calculation was adding the commit hash
* Fix for mkleehammer#1082 libraries in Linux wheels (mkleehammer#1084)
* Use argparse instead of optparse (mkleehammer#1089)

Co-authored-by: Hugo van Kemenade, Alex Nelson, Kian Meng Ang, Gord Thompson, bamboo, bdholder, Benjamin Holder, Inada Naoki, Michael Fladischer, Anatoli Babenia, Francisco Morales, Michael Kleehammer, Gilad Leifman, Bob Kline, Leandro Scott, Jordan Mendelson, Keith Erskine
Environment

HEAD of master branch

Issue

The `test_columns()` unit test in `pgtests.py` tries to access `row.precision` for each `Row` object in the result set returned by the call to `cursor.columns()`. This results in an error for the test. According to the documentation, `precision` is not a column in the rows returned by `cursor.columns()`.
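The failure mode can be illustrated without a live connection: a row object that only carries the ODBC 3.x attribute name raises `AttributeError` when the ODBC 2.x name is accessed. The namedtuple below is a stand-in for a pyodbc `Row`, not the real class:

```python
from collections import namedtuple

# Stand-in for a pyodbc Row from cursor.columns() under a driver that
# reports the ODBC 3.x name 'column_size' rather than 'precision':
ColumnsRow = namedtuple("ColumnsRow", ["table_name", "column_name",
                                       "type_name", "column_size"])
row = ColumnsRow("t1", "xΏz", "varchar", 4)

print(row.column_size)  # 4 -- the ODBC 3.x name works
try:
    row.precision       # what test_columns() attempts
except AttributeError:
    print("no 'precision' attribute")  # the error behind this issue
```

Whether the attribute exists thus depends entirely on which column names the driver returns, which is why the same test passes under one psqlodbc release and fails under another.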