SQL Anywhere Bug Fix Readme for Version 17.0.0, build 6315
A subset of the software with one or more bug fixes. The bug fixes are
listed below. A Bug Fix update may only be applied to installed software
with the same version number.
While some testing has been performed on the software, you should not distribute
these files with your application unless you have thoroughly tested your
application with the software.
A complete set of software that upgrades installed security/encryption components
while only updating the SQL Anywhere components to the level of the previously
released build for a given platform.
These are generated so that security/encryption changes can be provided quickly.
If any of these bug fixes apply to your installation, iAnywhere strongly recommends
that you install this fix. Specific testing of behavior changes is recommended.
================(Build #4116 - Engineering Case #812492)================
The version of OpenSSL used by all SQL Anywhere and IQ products has been
upgraded to 1.0.2n.
================(Build #4101 - Engineering Case #812032)================
The version of OpenSSL used by all SQL Anywhere and IQ products has been
upgraded to 1.0.2m.
================(Build #1486 - Engineering Case #798416)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.1t.
================(Build #1443 - Engineering Case #796406)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.1s.
================(Build #1410 - Engineering Case #795323)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.1r.
================(Build #1351 - Engineering Case #793255)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.1q.
================(Build #5732 - Engineering Case #818554)================
MobiLink server now requires a 64-bit operating system for Windows. The 32-bit
server is no longer supported.
================(Build #1310 - Engineering Case #791709)================
A new client network option has been added, “allow_expired_certs”. When set,
MobiLink clients will accept a server certificate that has either expired
or is not yet valid and continue with the synchronization (unless there is
some other problem with the certificate). By default, the sync will fail
in this case with an appropriate error, which was the previous behavior.
================(Build #6073 - Engineering Case #821196)================
For HTTP-based streams, one synchronization can require multiple HTTP requests,
so when using multiple MobiLink servers you must either use the Relay Server
or configure session-based affinity in your load balancer to ensure that
all requests go to the same MobiLink server. The MobiLink server sets two
HTTP headers that can be used to control server affinity: the ml-session-id
header is a UUID that is unique for each synchronization; the ml-client-id
header is unique for each remote database.
If your load balancer cannot be configured to use either of those headers,
you can now use the session_id_cookie or client_id_cookie stream options
to cause the server to also set a cookie with the given name containing the
ml-session-id or ml-client-id value, respectively. For example
-x http(port=8081;session_id_cookie=JSESSIONID)
The above will set the commonly used JSESSIONID cookie with the value of
the ml-session-id header.
-x http(port=8081;client_id_cookie=JSESSIONID)
The above will set the commonly used JSESSIONID cookie with the value of
the ml-client-id header.
These options are available with HTTP and HTTPS streams.
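As an illustration of the affinity these cookies enable, the sketch below is a hypothetical load-balancer routine (not MobiLink code): the cookie name follows the JSESSIONID example above, and the backend names are made up. Any deterministic mapping from cookie value to backend gives the required stickiness.

```python
import hashlib

def pick_backend(cookies, backends):
    """Route every request of one synchronization to the same backend by
    hashing the affinity cookie set from the ml-session-id header."""
    session_id = cookies.get("JSESSIONID")
    if session_id is None:
        # No affinity information yet; fall back to the first backend.
        return backends[0]
    digest = hashlib.sha256(session_id.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

backends = ["ml-server-a:8081", "ml-server-b:8081"]
first = pick_backend({"JSESSIONID": "ab12cd34"}, backends)
# The same session cookie always maps to the same MobiLink server:
assert pick_backend({"JSESSIONID": "ab12cd34"}, backends) == first
```

A real load balancer would be configured to do this natively; the point is only that routing on the cookie value keeps all requests of one synchronization on one server.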
================(Build #2006 - Engineering Case #796815)================
The .NET 4.0 version of the SQL Anywhere .NET Data Provider has been removed.
Microsoft no longer provides security updates, technical support or hotfixes
for .NET 4.0. Later versions of .NET 4 continue to be supported by the SQL
Anywhere .NET Data Provider.
================(Build #1270 - Engineering Case #789513)================
SetupVSPackage.exe did not register the SQL Anywhere .NET DDEX Provider with
Visual Studio 2015. This problem has now been corrected.
================(Build #6236 - Engineering Case #823517)================
The Linux/UNIX version of the SQL Anywhere and SAP IQ ODBC drivers (for example,
libdbodbc17.so on Linux) now support tracing of entry/exit into ODBC calls.
The following is an example of trace output for a SQLAllocHandle call.
2020/11/13 14:09:58.341000 pid=194620, ppid=16325, thread=194620
Enter SQLAllocHandle:
SQLSMALLINT HandleType 1
SQLHANDLE InputHandle 0x0000000000000000
SQLHANDLE * OutputHandlePtr 0x00007FF7008221E8
2020/11/13 14:09:58.385000 pid=194620, ppid=16325, thread=194620
Exit SQLAllocHandle: return code 0 ( SQL_SUCCESS ):
SQLSMALLINT HandleType 1
SQLHANDLE InputHandle 0x0000000000000000
SQLHANDLE * OutputHandlePtr 0x00007FF7008221E8
[0x00000213ED616590]
To enable tracing in the ODBC driver, you must set two environment variables,
TRACELEVEL and TRACELOG.
TRACELEVEL=NONE | MINIMAL | LOW | MEDIUM | HIGH | ALL
- NONE
No tracing information is printed.
- MINIMAL
Routine name and parameters are included in the output.
- LOW
In addition to the above, return values are included in the output.
- MEDIUM
In addition to the above, the date and time of execution are included
in the output.
- HIGH
In addition to the above, parameter types are included in the output.
- ALL
In addition to the above, process ID and thread ID are included in the
trace output.
TRACELOG=trace-file-log
If TRACELOG is not defined, the output is written to a file called odbctrace
in the current directory of the client ODBC application.
As alternatives to these environment variables, you can use the following.
TRACE=YES | NO
This environment variable option is shorthand for TRACELEVEL=ALL or TRACELEVEL=NONE.
TRACEFILE=trace-file-log
This environment variable option is identical to TRACELOG and matches the
keyword used by the UnixODBC driver.
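The resolution rules above can be sketched as follows. Note that the precedence of TRACELEVEL over TRACE when both are set is an assumption; the text does not specify it.

```python
def effective_trace_level(env):
    """Resolve the ODBC trace level from environment settings:
    TRACE=YES/NO is shorthand for TRACELEVEL=ALL/NONE."""
    if "TRACELEVEL" in env:
        return env["TRACELEVEL"].upper()
    return "ALL" if env.get("TRACE", "NO").upper() == "YES" else "NONE"

def trace_log_path(env):
    """TRACELOG (or the UnixODBC-style TRACEFILE) names the log file;
    otherwise output goes to a file called odbctrace in the current
    directory of the client ODBC application."""
    return env.get("TRACELOG") or env.get("TRACEFILE") or "odbctrace"

assert effective_trace_level({"TRACE": "YES"}) == "ALL"
assert effective_trace_level({"TRACELEVEL": "MEDIUM"}) == "MEDIUM"
assert trace_log_path({}) == "odbctrace"
```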
Important Notes:
The SAP IQ ODBC driver bundled with SAP IQ is not compatible with the SQL
Anywhere ODBC driver. Each driver supports features that are exclusive to
the respective database products. For example, if you use the SQL Anywhere
ODBC driver with the SAP IQ database server then you may get unexpected results.
The SAP IQ ODBC driver name is "SAP IQ". The SQL Anywhere ODBC
driver name is "SQL Anywhere 17".
The trace features described above are not included with the Windows versions
of the ODBC drivers, since the Microsoft Driver Manager supports tracing.
Other Fixes:
The iqdsn utility for creating, listing, and deleting SAP IQ ODBC Data Sources
will create ODBC driver data sources using the SAP IQ driver name and it
will list both old (Sybase IQ) and new (SAP IQ) driver entries. We encourage
you to update older Sybase IQ data sources since the driver name has changed.
The iqdsn utility has been updated to not include "SQL Anywhere 17
- Oracle" driver data sources in its output.
The Microsoft Windows ODBC Data Source Administrator dialog for creating/modifying
SAP IQ/SQL Anywhere ODBC driver data sources has [Help] buttons that will
now open a browser to the SQL Anywhere help topics in the SAP Help portal.
The documentation there is suitable for creating/modifying SAP IQ data sources
as well. Previously, the [Help] buttons took you to non-existent pages.
================(Build #6004 - Engineering Case #820673)================
The OData Server has been upgraded to use Jetty 9.4.24.
================(Build #1448 - Engineering Case #796522)================
The OData Server has been upgraded to use Jetty 9.3.7.
================(Build #6279 - Engineering Case #824705)================
The SQL Anywhere Node.js driver now supports node.js 12.
================(Build #1287 - Engineering Case #790641)================
There are now multiple versions of the node.js drivers shipped for JavaScript
clients and JavaScript external environments, as well as the open-source
driver hosted on github.com and npmjs.com. A driver is now available for
each of the following node.js versions: 0.10, 0.12, and 4.x.
================(Build #6171 - Engineering Case #822188)================
A new function TOLOCALTIME( timestamp-expression ) converts a TIMESTAMP WITH
TIME ZONE value or TIMESTAMP value which is assumed to be in Coordinated
Universal Time (UTC) to a local time timestamp value using the database server
locale's standard time/daylight savings time rules.
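As a rough analogue of what TOLOCALTIME does, this Python sketch treats a naive timestamp as UTC and converts it to the machine's local zone (an illustration only, not the server's implementation):

```python
from datetime import datetime, timezone

def to_local_time(ts):
    """Treat a naive timestamp as UTC (as TOLOCALTIME does for plain
    TIMESTAMP values) and convert it to the local time zone, which
    applies the local standard/daylight-saving rules."""
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)
    return ts.astimezone()  # local zone of this machine

local = to_local_time(datetime(2021, 6, 1, 12, 0, 0))
# The instant in time is unchanged; only the representation is local:
assert local.astimezone(timezone.utc).replace(tzinfo=None) == datetime(2021, 6, 1, 12, 0, 0)
```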
================(Build #5787 - Engineering Case #819001)================
The algorithm that SQL Anywhere and SAP IQ software uses to search for files
such as shared libraries has been revised. Software components such as the
database server, administration utilities, and client applications using
the provided software APIs are affected by this change.
The new revised file search algorithm proceeds in the order outlined below:
1. If an absolute path is specified, then the path is verified, and the
search does not continue. On Windows, drive relative paths are not permitted
(for example, C:file.txt or \file.txt) and the component requesting the search
may fail the file reference.
2. On Windows only, the current module's directory is searched. This is
the directory where the currently executing program executable file or library
file (DLL) is located. For libraries, this is usually the Bin64 or Bin32
folder.
3. The current executable's directory is searched. This is the directory
where the currently executing program executable file is located. It may
be the same directory as in 2 above.
4. The SQL Anywhere / SAP IQ installation path, specified by the SQLANY17
/ IQDIR16 environment variable, and certain subdirectories like Bin64, Bin32,
and Java are searched. The subdirectories searched depend on the file type.
For example, JAR files are searched in the Java subdirectory only. For most
software deployments, it is required that SQLANY17 (for SQL Anywhere) or
IQDIR16 (for SAP IQ) be defined in the environment to permit the software
to run successfully.
5. On UNIX/Linux systems, shared objects are searched using the library
path environment variable:
a. LD_LIBRARY_PATH on Linux and Solaris
b. LD_LIBRARY_PATH and SHLIB_PATH on HP-UX
c. LIBPATH on IBM AIX
d. DYLD_LIBRARY_PATH on MacOS
6. For some types of files (non-binaries), the location specified by the
file path is searched. This search may be relative to the current directory
of the requesting software. Typically, this search is used for user-specified
file paths.
7. For some types of files (non-binaries) on UNIX/Linux systems, a product-specific
subdirectory of the user’s home directory is searched. This subdirectory
is $HOME/.sqlanywhere17 for SQL Anywhere and $HOME/.sqlanywhere16 for SAP
IQ.
8. For some types of files (non-binaries) on Windows, a product-specific
subdirectory in the AppData folder (%APPDATA%) is searched. This subdirectory
is “SQL Anywhere 17” for SQL Anywhere and “SQL Anywhere 16” for SAP IQ.
9. For some types of files (non-binaries) on Windows, a product-specific
subdirectory directory in the Common AppData folder (%ALLUSERSPROFILE%) is
searched. This subdirectory is “SQL Anywhere 17” for SQL Anywhere and “SQL
Anywhere 16” for SAP IQ.
10. The folders specified by the PATH environment variable are searched
last. Shared objects on UNIX/Linux systems are not searched using the PATH.
You should exercise care when choosing what directories are listed in the
PATH since they could be accessed by the software and applications.
The software administration tools like Interactive SQL require that the
SQLANY17 / IQDIR16 environment variable be defined and point to the root
of the Java folder. For example, if the software product’s Java folder is
located at C:\SQLA17\Java then the root is C:\SQLA17.
The significant change to the search algorithm is that many fewer subdirectories
are searched. In the previous version of the software, many combinations
of folder names (bin64, java, scripts, etc.) and paths including parent (.\folder-name)
sibling (..\folder-name), and child (path\folder-name) directories were attempted.
Now, only known directories for known components are examined.
On Windows, the software no longer searches the directory specified by the
product’s Location registry entry. The “Windows” directories including System32
are no longer searched unless included in the PATH.
Once the software update is applied, you should verify the correct operation
of your applications. This is especially important for deployments that do
not follow the software’s default installation directory structure.
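The search order above can be sketched for the common case of a shared library on Linux. This is an illustration only, not the actual implementation; the placeholder executable-directory marker and the lowercase subdirectory names are assumptions.

```python
import os

def candidate_dirs(file_name, env):
    """Sketch of the revised search order for a shared library on Linux."""
    if os.path.isabs(file_name):
        # Step 1: an absolute path is verified and the search stops.
        return [os.path.dirname(file_name)]
    dirs = ["<executable-dir>"]                          # step 3
    install = env.get("SQLANY17")
    if install:
        # Step 4: install subdirectories (names assumed for Linux).
        dirs += [os.path.join(install, d) for d in ("bin64", "bin32")]
    # Step 5: the library path environment variable on Linux.
    dirs += env.get("LD_LIBRARY_PATH", "").split(":")
    # Step 10 (PATH) is omitted: shared objects are not searched
    # using PATH on UNIX/Linux.
    return [d for d in dirs if d]

dirs = candidate_dirs("libdbodbc17.so", {"SQLANY17": "/opt/sqlanywhere17"})
assert dirs[0] == "<executable-dir>"
assert "/opt/sqlanywhere17/bin64" in dirs
```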
================(Build #5746 - Engineering Case #818535)================
A new TLS / HTTPS option has been added to enable and disable older versions
of TLS if necessary. To change the minimum version of TLS that will be used,
you can use min_tls_version=<ver> in the -ec or -xs switches or as
part of the ENCRYPTION=tls connection parameter. For example, to allow TLSv1
connections over HTTPS, use "-xs https(min_tls_version=1.0;<other
parameters>)". Valid versions are "1.0" (the default),
"1.1", or "1.2". The dot is optional so you can also
use 10, 11, or 12.
Versions older than TLSv1 (e.g. SSLv2, SSLv3) are disabled and cannot be
enabled.
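A sketch of how the optional-dot version values can be normalized (an illustration of the accepted forms, not the server's parser):

```python
def normalize_tls_version(ver):
    """Normalize a min_tls_version value: the dot is optional, so
    '1.2' and '12' are equivalent. Returns (major, minor)."""
    digits = ver.replace(".", "")
    if digits in ("10", "11", "12"):
        return (int(digits[0]), int(digits[1]))
    raise ValueError("valid versions are 1.0, 1.1, or 1.2 (or 10, 11, 12)")

assert normalize_tls_version("1.2") == (1, 2)
assert normalize_tls_version("10") == (1, 0)
assert normalize_tls_version("11") == normalize_tls_version("1.1")
```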
================(Build #5728 - Engineering Case #818555)================
SQL Anywhere no longer provides 32-bit binaries or libraries for macOS. Use
the 64-bit versions instead.
================(Build #2010 - Engineering Case #797189)================
If the server is started with either TLS or HTTPS and a certificate that
(a) uses the SHA-1 hashing algorithm and (b) expires in 2017 or later, a
warning is now displayed on the console. The warning states that a SHA-1
certificate is being used, and that the certificate should be upgraded to
SHA-2.
================(Build #1384 - Engineering Case #794651)================
SQL Anywhere spatial support now permits geometries to be created outside
of the SRS bounds. Geometries were previously allowed to exceed SRS bounds
by 50% in each direction, but beyond that SQLE_SLERR_OBJECT_OUT_OF_SRS_BOUNDS
(-1484) would have been raised. The same error would also have been raised
on round-earth SRS if the geometry points exceeded the maximum allowable
range (lon -360 to 360; lat –180 to 180). These geometries can now be created
provided that all points can be represented in the SRS coordinate system
and the geometry remains trivially valid.
In the case of a round-earth SRS, the coordinates are wrapped so that they
fall within the allowable range. If latitude crosses a pole, it is adjusted
relative to the pole that it crosses, and the longitude is adjusted by 180
degrees to compensate. Longitude is simply adjusted by 360 degrees until
it falls in the allowable range. The resulting geometry is then checked
against the specific SRS bounds.
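The wrapping rules can be sketched as follows (an illustration, not the server's implementation):

```python
def wrap_point(lon, lat):
    """Wrap a round-earth coordinate pair into the allowable range."""
    # If latitude crosses a pole, reflect it back relative to that pole
    # and shift longitude by 180 degrees to compensate.
    if lat > 90:
        lat = 180 - lat
        lon += 180
    elif lat < -90:
        lat = -180 - lat
        lon += 180
    # Longitude is adjusted by 360 degrees until it falls in range.
    while lon > 180:
        lon -= 360
    while lon < -180:
        lon += 360
    return (lon, lat)

assert wrap_point(210, 0) == (-150, 0)
assert wrap_point(0, 100) == (180, 80)   # crossed the north pole
```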
When a geometry is created that exceeds the SRS bounds, it is flagged as
such. If that geometry is used in an index, it is treated as though it consumes
the entire SRS, effectively causing a linear scan over the out-of-bounds
geometries. Other internal indexing is also disabled.
Lack of index support will cause queries over tables that include out-of-bounds
geometries to be slow, but they should work as expected.
In order to detect geometries that are not indexable, a new predicate ST_IsIndexable()
has been added. A geometry is indexable if it is trivially valid and fits
within the expanded bounds of the SRS. For example:
select new ST_Point( 0, 0, 1000004326 ).ST_IsIndexable() -- returns 1 because
it is within the SRS bounds
select new ST_Point( 210, 0, 1000004326 ).ST_IsIndexable() -- returns 1 because
it is within the expanded SRS bounds (bounds + 50%)
select new ST_Point( 361, 0, 1000004326 ).ST_IsIndexable() -- returns 0 because
it is outside of the expanded SRS bounds
For a round-earth SRS with standard boundaries (lon –180 to 180; lat –90
to 90), ST_IsIndexable() will always return 1.
================(Build #1384 - Engineering Case #794347)================
The server could have failed an assertion or failed to create some valid
round-earth geometries. This has been fixed.
================(Build #1382 - Engineering Case #794129)================
The server performs performance rewrites on ISNULL and COALESCE function
argument lists based on the nullablility of the arguments. More rewrites
can now also be done if the argument contains a referencing old or new column
of a trigger.
================(Build #1455 - Engineering Case #782143)================
In previous versions, SQL Anywhere had two licensing models: per seat licensing
and processor licensing. In SQL Anywhere 17, processor-based licensing is
replaced by core-based licensing.
The licensing utility, dblic.exe, has been updated to recognize a new license
type (core) and has removed (processor) from the list of valid license types:
SQL Anywhere Server Licensing Utility Version 17.0.0.1000
Usage: dblic [options] license_file ["user name" "company name"]
        @<data>     expands <data> from environment variable <data> or file <data>
Options (use specified case, as shown):
        -l <type>   license type: perseat or core
        -k <key>    registration key
        -o <file>   append output messages to file
        -q          quiet: do not display messages
        -u <n>      number of users or processors for license
================(Build #1215 - Engineering Case #786658)================
Starting in version 16, the Interactive SQL utility displayed a warning on
shutdown (or disconnect) if there were uncommitted database changes, and
the option to commit on exit was not enabled. The window that contains that
warning now has a checkbox which allows for suppressing the warning. It can
also be disabled, or re-enabled, by going to the Options dialog and the SQL
Anywhere -> Execution tab.
================(Build #6036 - Engineering Case #821131)================
UltraLite now supports Mac Catalyst. (This is a 64-bit iOS application running
on macOS.)
The UltraLite runtime libraries are now provided as an XCFramework bundle,
in addition to the existing and unchanged fat library (libulrt.a).
The XCFramework includes libraries for: 64-bit simulator, arm64 device,
Mac Catalyst, and macOS. Adding the framework to your project is now all
that's required to use UltraLite — you no longer need to manually specify
the include directory and library in the build settings. Ensure that the
setting for the framework is "Do Not Embed". The location of the
framework in the install is <install>/ultralite/iphone/ulrt.xcframework.
The Mac Catalyst library is only included in the XCFramework.
================(Build #1189 - Engineering Case #786041)================
UltraLite is now supported for 32-bit Linux. Note the following special
installation instructions. When installing for the first time, or overwriting
a current installation, select the option "1. Create a new installation"
and then select the components to install; 32-bit UltraLite will be available
for install. To upgrade an existing installation instead, the setup program
must be run twice: run it once and choose the menu item "2. Modify an existing
installation" to install the new 32-bit UltraLite feature, then run it again
and choose the menu item "3. Upgrade an existing installation" to update
all of the rest of the files.
================(Build #5743 - Engineering Case #818553)================
UltraLite’s support for modern Windows development (Windows Runtime/Windows
Store) is updated to Universal Windows Platform on Windows 10 and Windows
10 Mobile using Visual Studio 2017. The supported processor architectures
remain x86, x64, and ARM. The UltraLite API itself remains unchanged.
The installation's UltraLite\WinRT folder has been replaced with an analogous
UltraLite\UWP folder.
Note that the classic Win32 libraries and .NET component remain unchanged.
================(Build #2084 - Engineering Case #799599)================
When using the MobiLink Plugin with an IQ consolidated database, an exception
would have occurred when all of the following were true:
- The consolidated database was IQ
- The Table Mappings editor was opened for a synchronization model and
a table was selected that did not support triggers.
- On the Download Strategy page of the editor, the option “Store timestamp
column in a shadow table” was selected.
The “Store timestamp column in a shadow table” option is now disabled for
IQ databases when a table is selected that does not support triggers. This
behavior is consistent with what existed prior to version 17.
================(Build #1164 - Engineering Case #784657)================
In the Create User Wizard dialog, if no authentication policy was defined,
the LDAP authentication policy drop-down would be empty, but it was still
possible to select “This user authenticates using an LDAP server”. Doing
this caused a null pointer exception when the wizard completed.
Now the LDAP-related controls are not shown on the page unless an
authentication policy is defined.
================(Build #1163 - Engineering Case #784577)================
Right clicking on a MobiLink Server command line and choosing Copy would
have caused a null pointer exception. This has been fixed.
================(Build #1420 - Engineering Case #795641)================
On Linux systems, opening help did not work if the machine used a network
proxy. This has been fixed.
================(Build #4913 - Engineering Case #818316)================
MobiLink server now supports synchronization to Microsoft SQL Server 2017
consolidated databases.
================(Build #5825 - Engineering Case #819473)================
If a farm of Relay Servers had been created and one of the Relay Servers
was down, it was possible for the backend servers to become temporarily
unavailable. This problem was more likely to occur on a mostly idle Relay
Server, and less likely on a busier system. The problem has now been fixed.
================(Build #4946 - Engineering Case #818357)================
If all the following were true :
1. A backend farm had been created with multiple backend servers
2. An HTTP request arrived with a Relay Server header indicating it should
go to backend server "X"
3. All the junctions for the Outbound Enabler connected to backend server
"X" were in use
then the Relay Server would send the HTTP request to a different backend
server in the same farm. Depending on the backend server, this may or may
not have caused problems. If the backend server was a MobiLink Server, this
would likely have generated an error indicating that it was unable to continue
an unknown HTTP session.
The Relay Server will now wait up to the backend server's max_junction_idle_sec
time before passing the HTTP request to a different backend server in the
same farm, and if the Relay Server does pass the request to a different backend
server, it posts a warning to the Relay Server log to indicate this has
happened. As a workaround to this problem, the -jsl and -jsh values on the
Outbound Enablers could be increased, decreasing the chance that all the
junctions for a given backend server were in use.
================(Build #2851 - Engineering Case #806599)================
The user specifies an amount of memory for the Relay Server to use with the
shared_mem option in the Relay Server configuration file, but this value
is modified to account for the number of backend servers. If the newly calculated
value exceeded 4 GB, the rshost process would crash on shutdown, and could
also have crashed during normal operation if the process required more than
4 GB of memory. This has been fixed.
================(Build #2019 - Engineering Case #797284)================
The Apache Relay Server could have crashed while it was shutting down. This
has been fixed.
================(Build #1432 - Engineering Case #795977)================
If the SQLANY17 environment variable had been set to a path that included
spaces, then the iis7_plus_setup.bat file in %SQLANY17%\RelayServer\IIS directory
would have created directories in the wrong locations, causing the setup
to fail. This has now been fixed.
================(Build #1258 - Engineering Case #789326)================
If the backend server sent multiple HTTP response headers with the same
name, the Relay Server for Apache relayed only the last such header to the
client instead of all of them. This has been fixed.
================(Build #5992 - Engineering Case #820435)================
If the SYNCHRONIZE command was used on a database that was initialized to
enable strongly encrypted tables, the SYNCHRONIZE command would fail even
if the KEY clause was used, reporting the error "Missing database encryption
key for database '???'" in the sp_get_last_synchronize_result output.
A possible workaround for this issue is to start the dbmlsync process
manually in server mode before calling the SYNCHRONIZE command, and include
the encryption key for the database on the dbmlsync command line. For example:
"dbmlsync -c uid=dba;pwd=pwd -sm -po 4433 -ek key". This has been
fixed.
================(Build #5946 - Engineering Case #820190)================
If the SYNCHRONIZE command or the dbmlsync API was used to perform a
synchronization and the sp_hook_dbmlsync_process_exit_code hook existed,
errors indicating that dbmlsync was not connected to the database could
have been reported even though the synchronization had succeeded. This
has been fixed.
================(Build #5908 - Engineering Case #819910)================
Queries executed by the version 17 SQL Anywhere Database C API (DBCAPI)
did not perform as well as they did in version 16. This also affected the
performance of queries executed through SQL Anywhere drivers based on DBCAPI,
such as Perl DBI, PHP, Python, and Ruby.
Two DESCRIBEs were executed for every query, which did not occur in version
16 and affected performance. This has been fixed, and performance of the
SQL Anywhere C API has been restored to version 16 levels.
================(Build #5750 - Engineering Case #818488)================
If a remote database included a table in a publication whose table_id in
the SYSTABLE table was greater than 65536, it was possible for dbmlsync to
have crashed while scanning an operation in the transaction log for that
table. This has been fixed.
================(Build #4917 - Engineering Case #817647)================
In very rare circumstances, it was possible for a rolled back transaction
to have been sent by dbremote or dbmlsync if the rollback had happened at
exactly the same time that the process finished scanning the transaction
log. This problem would only occur in version 17.0.8 with build 4146 or
higher, or version 17.0.9 with build 4783 or higher. This has now been fixed.
================(Build #4879 - Engineering Case #816882)================
If the SYNCHRONIZE START command had been executed to pre-start the dbmlsync
process in server mode, and a schema change was then performed on the remote
database that affected any of the database objects involved in synchronization,
it was possible for subsequent SYNCHRONIZE commands to have failed until
a SYNCHRONIZE STOP was executed. This has been fixed.
================(Build #4879 - Engineering Case #816881)================
The dbmlsync or dbremote process could have been delayed during startup,
and could have blocked database server processing while starting up. This
has been fixed.
================(Build #4146 - Engineering Case #813618)================
In very rare circumstances, it was possible for a transaction to have been
missed by dbmlsync if dbmlsync had been using transactional uploads and the
commit for this transaction had happened at exactly the same time that the
process finished scanning the transaction log. This has been fixed.
================(Build #4038 - Engineering Case #810464)================
If dbmlsync connected to the database using integrated logins, it was possible
for the dbmlsync process to have crashed. This problem has been fixed.
================(Build #3978 - Engineering Case #808771)================
The SQL Anywhere MobiLink server now supports consolidated databases running
on SAP HANA servers:
1) Setup file:
The setup script file that will create system objects necessary for the
MobiLink server is named synchana.sql. This file can be found in the directory,
MobiLink\setup under the SQL Anywhere installation. This file can be executed
against a HANA database using the HANA database interactive terminal (hdbsql)
or SAP HANA Studio;
2) ODBC driver:
The recommended ODBC driver for HANA is the driver from HANA 2.0 SP0 or later;
3) SQL script execution timeout:
The -tc and -tf options work properly if both the HANA server and client
versions are HANA 2 SP0 or later.
================(Build #2112 - Engineering Case #800505)================
If the MobiLink client (dbmlsync) had been configured to show upload/download
row values in the dbmlsync log, it was possible for dbmlsync to have crashed.
This has now been fixed. A workaround is to reduce the verbosity of the dbmlsync
log.
================(Build #2091 - Engineering Case #800025)================
The MobiLink client utility (dbmlsync) prints the communication parameters
used to connect to the MobiLink server, and this string could have contained
a password from the identity_password, http_password, or http_proxy_password
parameters. When dbmlsync printed the synchronization profile options, the
MobiLink password would also have been printed, even if "-vp" was
not specified.
These issues have now been fixed.
================(Build #1414 - Engineering Case #790558)================
If a SQL Remote or dbmlsync hook procedure had been owned by dbo, it would
not have been found by the log scanning tool, and thus would not have been
called during replication or synchronization. This has now been fixed.
================(Build #1341 - Engineering Case #792594)================
If the SQL Anywhere MobiLink Client had to scan a large number of blobs from
the transaction log, it could have been slow. The performance of the log
scanning code when scanning blobs has been improved, although the benefits
of this change are highly dependent on the available memory and processor
power of the machine, as well as the blobs themselves.
================(Build #6290 - Engineering Case #824428)================
If a soft shutdown request had been issued to an ML Server, the server would
not have shut down if any process had opened a socket on the ML Server port.
This could cause the soft shutdown of the ML Server to take hours if the
ML Server was exposed to the Internet. This has been fixed: a connection
to the ML Server now prevents a soft shutdown only if it has identified
itself as a MobiLink client.
================(Build #6128 - Engineering Case #821522)================
HTTPS synchronization through the relay server could fail when using HTTP
1.0 with an error from the web server that the SNI name doesn’t match the
Host HTTP header. This has been fixed.
================(Build #2785 - Engineering Case #804697)================
A TLS handshake could have fail if the case of the hostname specified in
the server’s certificate did not exactly match the case of the hostname provided
to the client. The problem can be worked around by either changing the case
of the hostname provided to the client to match the case in the server’s
certificate, or setting the skip_certificate_name_check option to true. Hostname
checking is now case insensitive. This has been fixed.
================(Build #2129 - Engineering Case #800928)================
There was a potential security vulnerability with MobiLink clients and the
Relay Server Outbound Enabler when synchronizing through HTTP proxies. This
has been fixed.
================(Build #2129 - Engineering Case #800927)================
Sync performance with HTTP and HTTPS could have been slow in some circumstances.
This has now been fixed.
================(Build #1331 - Engineering Case #792439)================
If an HTTP server or other intermediary converted an HTTP response to be
chunked-encoded, the synchronization would have failed. This has been fixed.
================(Build #1237 - Engineering Case #788219)================
It was possible that when a synchronization with HTTP or HTTPS failed, a
duplicate HTTP request could have been sent to the server. This would most
likely have led to a sync failure, but there was a small chance that this
could cause data corruption. This has now been fixed.
================(Build #1157 - Engineering Case #784330)================
If HTTP or HTTPS was being used for synchronization, and a new MobiLink synchronization
request was sent to a socket on which a different synchronization had already
taken place or on which a synchronization was currently active, the MobiLink
Server could have reported an error indicating the ml-session-id had changed,
or could have disconnected the active synchronization. This has now been
fixed; the MobiLink Server now allows new HTTP synchronizations to arrive
on the same socket as a previous or active synchronization.
================(Build #6170 - Engineering Case #822062)================
The MobiLink server could crash if the -ds switch had a non-zero value.
This has been fixed.
================(Build #6090 - Engineering Case #821239)================
MobiLink clients could hang when making restartable download requests. This
was more likely to happen if -ds was set very low. This has been fixed.
================(Build #5922 - Engineering Case #820036)================
Synchronization through a server that served multiple hostnames, each with
its own server certificate, could fail with a handshake error because the
server sent back a certificate whose name did not match the name of the host
the client was trying to connect to. This has been fixed: MobiLink clients
now always send the Server Name Indication (SNI) TLS extension with the
hostname they are connecting to, so the server knows which certificate to
send back.
================(Build #5905 - Engineering Case #819932)================
If the ML Server was connected to an Oracle or HANA consolidated database
and was synchronizing GUID values, in very rare cases, it was possible for
the MobiLink Server to have reported a protocol error during a synchronization.
This issue has now been fixed.
================(Build #5901 - Engineering Case #819920)================
If a large number of small incremental or transactional uploads were sent
to the MobiLink Server, the MobiLink Server would consume significantly more
memory than was needed. The MobiLink Server is now less aggressive when
initially allocating memory for an upload.
================(Build #5865 - Engineering Case #819671)================
If an HTTP client that wasn't a MobiLink Client had made a request to the
MobiLink server and did not include a "User-Agent" HTTP header,
then the response from the MobiLink server (a 404 Not Found error) would
have been unnecessarily delayed by two minutes. This has now been fixed.
================(Build #5814 - Engineering Case #819382)================
The MobiLink server could crash. This has been fixed.
================(Build #5744 - Engineering Case #818503)================
It was possible for the MobiLink Server to have delayed decrementing its
session count until the operating system liveness timed out the connection.
Customers using the -sm switch on the MobiLink Server could receive 10101
warnings (Synchronization request from client 'remote_id' was rejected)
even if there were no active synchronizations. This has been fixed, and a
"SESSION_COUNT" periodic performance value has been added to the ML Server
to help diagnose these issues going forward.
================(Build #4907 - Engineering Case #817457)================
If the MobiLink Server had been generating the download stream and a hard
shutdown of the MobiLink Server was requested, the download would be aborted,
but would not be rolled back. The COMMIT of the end_synchronization transaction
would then incorrectly COMMIT any changes that had been made in the download
transaction. This has now been fixed, and the download transaction is rolled
back when a hard shutdown is requested.
================(Build #4906 - Engineering Case #817456)================
If the MobiLink Server had rejected a number of non-persistent HTTP or HTTPS
synchronizations because the number of concurrent active synchronizations
exceeded the maximum specified by the -sm switch, it was possible for
the rejected synchronization to have remained active in the MobiLink Server,
but in a state where the synchronization could not proceed. This could eventually
lead to a situation where all active synchronizations allowed in the MobiLink
Server would be active, but rejected and unable to proceed, preventing the
MobiLink Server from accepting new incoming synchronizations. This has now
been fixed.
================(Build #4885 - Engineering Case #816994)================
If user-defined .NET code had been executing in the MobiLink Server to populate
the download_cursor or download_delete_cursor and the value being bound to
a particular column had been out of range for the data type, an unhelpful
error message would have been printed to the MobiLink log similar to "[-10225]
User exception: Parameter 1 7 is out of range for conversion: SystemException".
The error message has been improved and now reads similar to "[-10225]
User exception: Parameter for column #2 is out of range for conversion to
data type integer: SystemException".
================(Build #4869 - Engineering Case #816600)================
Several server-side issues with restartable downloads were fixed:
1) There was an undocumented limit of 200 stored downloads; this has been
removed.
2) If a download failed to be generated, a restart request for that download
would fail with error -10255, "Unable to start the restartable synchronization".
The remote will now get the error from the failed download.
3) If the server received a restart request, but the download had not yet
been generated and the download was larger than the -ds size, the restart
request would not be given the download and would instead hang forever.
It will now receive the download.
================(Build #4836 - Engineering Case #815883)================
Additional logging was added to the -vp switch. Changes were made to the
undocumented _log_all=1 stream option. Some output that was printed at level
1 is now printed at level 2, and there is additional logging at level 1.
================(Build #4821 - Engineering Case #815392)================
If -wn was greater than 1, the MobiLink Server could crash, and restartable
downloads could be kept longer than they should have been, or not long
enough. This has been fixed.
================(Build #4810 - Engineering Case #814979)================
The MobiLink server could crash when using HTTP. This has been fixed.
================(Build #4797 - Engineering Case #814465)================
Additional status check diagnostic logging has been added to the -vp ML server
log output.
================(Build #4084 - Engineering Case #811516)================
In very rare circumstances, the MobiLink Server could have crashed on shutdown
if Java or .NET synchronization scripts had been used. This problem has
been fixed.
================(Build #4053 - Engineering Case #810789)================
The MobiLink server issued, and then ignored, an error on startup when:
- The consolidated database was Microsoft SQL Server or Azure.
- The consolidated database had a case-sensitive collation.
The error was:
E. 2017-09-13 09:59:18. <Main> [-10002] Consolidated database server
or ODBC error: ODBC: [Microsoft][SQL Server Native Client 11.0][SQL Server]Invalid
object name 'SYSOBJECTS'. (ODBC State = 42S02, Native error code = 208)
This has been fixed.
================(Build #4038 - Engineering Case #810596)================
The MobiLink server could hang. This has been fixed.
================(Build #4026 - Engineering Case #810246)================
The MobiLink server could crash with -wn greater than 1. This has been fixed.
================(Build #4024 - Engineering Case #810171)================
Previously, MobiLink did not officially support SQL Server 2016. This has
been fixed; MobiLink customers can use SQL Server 2016 just as they do
SQL Server 2014.
SQL Server 2016 introduces temporal tables that automatically track the
times of all inserts, updates and deletes. Temporal tables can make writing
MobiLink download scripts simpler, because triggers and/or shadow tables
are no longer required. Consider the following example.
Temporal Table Example
Consider the following temporal table. This CREATE TABLE statement creates
both the MyTable table and the MyTableHistory history table:
CREATE TABLE MyTable
(
pk integer primary key not null,
c1 integer,
c2 integer,
SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH
(
SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTableHistory)
)
A straightforward download_cursor for this table that makes use of the temporal
columns is:
-- Create the download_cursor script for MyTable.
-- Current rows are the rows with a SysEndTime of '9999-12-31 23:59:59.9999999'.
-- Only select the current rows that have changed since the last download.
-- NOTE: the temporal columns are UTC, but by default MobiLink uses local
-- time when calculating the s.last_table_download value.
exec ml_add_table_script 'v1', 'MyTable', 'download_cursor',
'SELECT pk, c1, c2 FROM MyTable FOR SYSTEM_TIME ALL
WHERE SysStartTime >= todatetimeoffset( {ml s.last_table_download},
datepart( TZoffset, sysdatetimeoffset() ) ) AND
SysEndTime = ''9999-12-31 23:59:59.9999999'''
A straightforward download_delete_cursor for this table that makes use of
the temporal columns is:
-- Create the download_delete_cursor script for MyTable.
-- A deleted row has a maximum SysEndTime less than '9999-12-31 23:59:59.9999999'.
-- Only select the deleted rows that have been deleted since the last download.
-- NOTE: the temporal columns are UTC, but by default MobiLink uses local
-- time when calculating the s.last_table_download value.
exec ml_add_table_script 'v1', 'MyTable', 'download_delete_cursor',
'SELECT d1.pk FROM MyTable FOR SYSTEM_TIME ALL d1
WHERE d1.SysEndTime >= todatetimeoffset( {ml s.last_table_download},
datepart( TZoffset, sysdatetimeoffset() ) ) AND
d1.SysEndTime = ( SELECT MAX( d2.SysEndTime )
FROM MyTable FOR SYSTEM_TIME ALL d2
WHERE d1.pk = d2.pk ) AND
d1.SysEndTime < ''9999-12-31 23:59:59.9999999'''
The key to both of the scripts above is the “FOR SYSTEM_TIME ALL” clause,
which performs an internal join of the MyTable and MyTableHistory tables
to consider both the current (MyTable) and old (MyTableHistory) row values.
================(Build #3966 - Engineering Case #810171)================
This is the same fix as Engineering Case #810171 for Build #4024, described
in full above: MobiLink now officially supports SQL Server 2016, and SQL
Server 2016 temporal tables can simplify download_cursor and
download_delete_cursor scripts.
================(Build #3455 - Engineering Case #808460)================
The MobiLink server was doing an unnecessary network flush during restartable
downloads. This has been fixed.
================(Build #3399 - Engineering Case #807001)================
The MobiLink server now gives a stricter set of HTTP cache control headers.
This should prevent more HTTP intermediaries from caching MobiLink HTTP requests.
================(Build #2131 - Engineering Case #801033)================
Requests could have failed with an internal stream error when using HTTP.
This has been fixed.
================(Build #2100 - Engineering Case #800185)================
If a version 12 lightweight poller had attempted to connect to a version
17 MobiLink Server, the MobiLink Server would have reported a protocol error,
and the version 12 lightweight poller would have failed to connect to the
MobiLink Server. This problem affected dblsn and applications that had been
built using the lightweight polling API and has now been fixed.
================(Build #1451 - Engineering Case #796694)================
The MobiLink server could have crashed when using restartable downloads with
the –wn option set to be greater than 1. This has been fixed.
================(Build #1428 - Engineering Case #796136)================
The MobiLink server could have crashed when using HTTPS with –wn set to be
greater than 1. This has been fixed.
================(Build #1411 - Engineering Case #795422)================
Clients could have crashed the MobiLink server after successfully authenticating.
This has been fixed.
================(Build #1397 - Engineering Case #795574)================
A number of problems with restartable downloads have been fixed:
- The sync server could have crashed
- The sync server could have reported an error instead of waiting if the
sync being restarted had not yet finished
- Download restarts were unnecessarily slow
- If a remote sent more than one restart request for its download, the
last one sent would sometimes fail because the server processed the last
one received, which may have been different from the last one sent
- It was possible to store more restartable download data than specified
with the –ds switch
- Failed, restartable syncs waiting for a resumption request would have
appeared stuck in the sending download phase of the MobiLink Profiler
================(Build #1396 - Engineering Case #794717)================
There were a number of problems with restartable downloads:
- the MobiLink server could have crashed
- the MobiLink server could have reported an error instead of waiting,
if the sync being resumed hadn’t yet finished
- download resumption was unnecessarily slow
- if a remote sent more than one restart request for its download, the
last one sent would sometimes fail because the server processed the last
one received, which may have been different from the last one sent
- it was possible to store more resumable download data than specified
with the –ds switch
- failed, resumable syncs waiting for a resumption request would appear
stuck in the sending download phase of the MobiLink Profiler
These issues have now been fixed.
================(Build #1356 - Engineering Case #793877)================
The MobiLink server would only have shown the major and minor parts of a
client's version string in a trace file, suppressing the patch level. This
problem is fixed. The MobiLink server now shows the full client version
string, including the major, minor, and patch numbers as well as the build
number, in its trace file.
================(Build #1343 - Engineering Case #792866)================
The MobiLink server could have crashed. This has been fixed.
================(Build #1334 - Engineering Case #792597)================
A file I/O error during a file transfer upload could have been reported as
protocol error 400. This has been fixed.
================(Build #1268 - Engineering Case #789932)================
UltraLite clients could get into a state where every sync would fail with
error -10400: “Invalid sync sequence ID for remote ID”. This has been fixed.
================(Build #1243 - Engineering Case #788454)================
The MobiLink server would have generated the error “[-10013] Version ‘…’
not found in the ml_script_version table. Cannot synchronize” if a client
requested a sync with a script version that was not implemented in the
consolidated database. Even after the script version was implemented in the
consolidated database and the client synchronized again, the MobiLink server
would still have reported the same error. This problem is now fixed.
The workaround was to restart the MobiLink server.
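For reference, a script version is registered in the consolidated database implicitly the first time any script is added for it. A minimal sketch, where the table, columns, and script text are illustrative and not taken from this fix:

```sql
-- Adding any script for version 'v1' also registers 'v1' in the
-- ml_script_version system table, so subsequent syncs that request
-- 'v1' will find it. Table and column names here are illustrative.
exec ml_add_table_script 'v1', 'MyTable', 'download_cursor',
    'SELECT pk, c1, c2 FROM MyTable
     WHERE last_modified >= {ml s.last_table_download}'
```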
================(Build #1231 - Engineering Case #787658)================
The MobiLink server could have crashed when using HTTP. This has now been
fixed.
================(Build #1180 - Engineering Case #785534)================
If an attempt was made to get a bit, tinyint or decimal data type from an
IDataReader from the UploadData object, a System.InvalidCastException error
would have been thrown. This has now been fixed.
================(Build #1180 - Engineering Case #785533)================
It was possible that when attempting to get a GUID data type from a DBRowReader,
a System.FormatException exception could have been thrown, even though there
was no issue with the format of the GUID. This issue has now been fixed.
================(Build #1180 - Engineering Case #785453)================
There were two problems when gathering integer values from a DBRowReader
from the MobiLink .NET API.
- If an attempt was made to get an unsigned smallint, integer or bigint
from a DBRowReader, a System.OverflowException would have been thrown if
the value was greater than the maximum value for the signed version of the
data type.
- If an attempt was made to get a tinyint from a DBRowReader, a System.InvalidCastException
would have been thrown.
Both these issues have been fixed.
================(Build #1180 - Engineering Case #751840)================
If the machine where the MobiLink Server was running had a localized setting
such that the decimal separator was not a period (for example, a comma),
there were a number of problems when the MobiLink .NET API was used to synchronize
data.
- Attempting to get a decimal data type from an IDataReader from the UploadData
object could have resulted in a System.FormatException error.
- Attempting to get a real, double or decimal data type from a DBRowReader
could have resulted in a System.FormatException error.
- Attempting to use a real or double data type in a DBParameter added to
a DBCommand could have resulted in an error indicating that the value could
not be converted to a real or double.
These problems have now been fixed.
================(Build #1360 - Engineering Case #793794)================
In some circumstances, retrieving a query result set from an Oracle database
through the SQLA ODBC driver could have been slow, especially for tables
with a small row width, because the ODBC driver fetched only 20 rows from
the database server each time. To make the fetch size configurable, a new
DSN configuration parameter, “Fetch array size (rows)”, has been introduced.
This parameter can be set from the “Configuration for SQL Anywhere driver
for Oracle” dialog box on Windows, or using the new DSN entry
FetchArraySize=xxx on UNIX. The default value is 20, and the default is
used if the parameter is not specified or is set to zero.
Increasing the “Fetch array size” reduces the number of round trips on the
network, thereby increasing performance. For example, if your application
normally fetches 100 rows, it is more efficient for the driver to fetch 100
rows at one time over the network than to fetch 20 rows at a time during
five round trips over the network. However, increasing the “Fetch array
size” will also increase the memory usage by the ODBC driver.
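On UNIX, the new entry goes in the DSN section of the odbc.ini file. A minimal sketch; every key shown here other than FetchArraySize (the driver path, Oracle connection parameters, and DSN name) is an illustrative assumption, not taken from this fix:

```ini
[my_oracle_dsn]
# Driver path and Oracle connection parameters are illustrative
Driver=/opt/sqlanywhere17/lib64/libdboraodbc17_r.so
UserID=scott
Password=tiger
SID=orcl
# Fetch 100 rows per network round trip instead of the default 20
FetchArraySize=100
```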
================(Build #1261 - Engineering Case #789321)================
The output data from stored procedure calls could have been truncated by
the SQL Anywhere ODBC driver for Oracle, if the SQL_C_WCHAR data type was
used when binding the INPUT_OUTPUT or OUTPUT parameters, and the Oracle OCI
library, version 12.1.0.2.0 was used. This problem is now fixed.
================(Build #4945 - Engineering Case #818323)================
When using the Code First approach of .NET Entity Framework, if a property
has the same name as a SQL keyword, then a syntax error occurs in the generated
CREATE TABLE statement. In the following example, "Group" clashes
with the SQL keyword "GROUP".
public class Blog
{
public Blog()
{
Posts = new List<Post>();
}
public int BlogId { get; set; }
public string Name { get; set; }
public string Group { get; set; }
public virtual List<Post> Posts { get; set; }
}
This problem has been fixed. The SQL code generator now places such identifiers
in brackets (for example, [Group]).
================(Build #4840 - Engineering Case #815956)================
Under some circumstances, a row constructed from a subquery could cause a
server crash when it was later assigned to another row. This has been fixed.
================(Build #4141 - Engineering Case #813360)================
Some Entity Framework 6 design-time functionality did not work when using
the SQL Anywhere version 17 .NET Data Provider.
The SQL Anywhere .NET Data Provider Entity Framework support file SSDLToSA17.tt
was not installed into the Visual Studio "Common7\IDE\Extensions\Microsoft\Entity
Framework Tools\DBGen" folder by the SetupVSPackage installer tool.
The SSDLToSA17.tt file is located in the "Assembly\V4.5" folder
and should have been copied to the Visual Studio location by SetupVSPackage.
This problem has been fixed.
================(Build #4130 - Engineering Case #812885)================
Using the SQL Anywhere .NET Data Provider with Entity Framework, an error
occurred in data query expressions using TimeSpan values.
The following is an example containing a "where" clause involving
a TimeSpan and a TIME database data type.
var query = from b in db.Blogs
where System.Data.Entity.DbFunctions.CreateTime(12, 34, 56.789 ) ==
b.ts
orderby b.Name, b.BlogId
select b;
This problem has been fixed.
================(Build #4129 - Engineering Case #812887)================
Using the SQL Anywhere .NET Data Provider with Entity Framework, an error
was reported when trying to use certain canonical functions. Generally, the
error was caused because the function was not implemented; in other cases,
the implementation was incorrect.
The following functions have been corrected, added, or removed:
- Round() can now take a second argument, the number of precision digits.
- Truncate() can now take a second argument, the number of precision digits.
The use of Truncate() caused a syntax error; it has been reimplemented so
that it generates a call to the TRUNCNUM system procedure.
- Abs() has been added.
- Contains(), StartsWith(), and EndsWith() have been added.
- Millisecond() has been added.
- DayOfYear() has been added.
- CurrentDateTime(), CurrentUtcDateTime(), and CurrentDateTimeOffset() have
been added.
- GetTotalOffsetMinutes() has been added.
- TruncateTime() has been added.
- CreateDateTime(), CreateDateTimeOffset(), and CreateTime() have been added.
- AddNanoseconds() and DiffNanoseconds() have been removed, since they are
not supported.
The datepart keywords have been revised to those supported by the database
server:
- Dateparts caldayofweek, cdw, calweekofyear, cwk, calyearofweek, cyr are
now supported.
- Dateparts microsecond, mcs, us, tzoffset, tz are now supported.
- Dateparts d, m, n, q, s, ww, y, yyyy have been removed, as they are not
supported.
These changes improve Entity Framework support. Here is an example that
uses some of these functions.
var dataset2 = query
.OrderBy(y => y.Name)
.Select(y => new
{
B_Name = y.Name
,B_ID = y.BlogId
,B_Url = y.Url
,B_Date = y.CreatedDate
,B_P1 = y.Name.StartsWith("C")
,B_P2 = y.Name.EndsWith("1")
,B_P3 = y.Name.Contains("d")
}
)
.ToList();
A workaround may be possible in some circumstances by implementing a SQL
function of the same name. Here is an example for CreateDateTimeOffset:
CREATE OR REPLACE FUNCTION dbo.CreateDateTimeOffset(yy int, mm int, dd int,
hh int, nn int, ss double, tzo int)
RETURNS DATETIMEOFFSET
BEGIN
RETURN TODATETIMEOFFSET(DATEADD(microsecond,ss*1000000,DATEADD
(second,3600*hh+60*nn,YMD(yy,mm,dd))),tzo);
END;
GRANT EXECUTE ON dbo.CreateDateTimeOffset to PUBLIC;
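As a quick sanity check of the workaround above (the argument values are illustrative, and this assumes TODATETIMEOFFSET interprets its second argument as an offset in minutes):

```sql
-- Illustrative call: builds the timestamp 2017-03-15 12:34:56.789
-- at a +02:00 offset (120 minutes)
SELECT dbo.CreateDateTimeOffset( 2017, 3, 15, 12, 34, 56.789, 120 );
```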
================(Build #4112 - Engineering Case #812382)================
The unmanaged code portion of the SQL Anywhere ADO.NET provider is contained
in a DLL that is unpacked by the provider into a directory and subsequently
loaded from there into memory. In some situations, this action violates system
security policies.
To accommodate this, the load procedure for the unmanaged code DLL (dbdata17.dll
or dbdata16.dll) has been changed as follows:
1. The provider looks for the dbdata DLL in the .NET application's directory.
If the DLL is found, then it is loaded and a version check is done. If the
DLL version matches the ADO.NET provider version, then the application is
launched. Otherwise, the next step is performed.
2. The provider looks for the dbdata DLL in the ADO.NET provider's directory
(this directory can be different from the application directory). If the
DLL is found, then it is loaded and a version check is done. If the DLL version
matches the ADO.NET provider version, then the application is launched. Otherwise,
the next step is performed.
3. The provider looks for the dbdata DLL in the "temp" directory
as described in the documentation. It starts with the directory at index
1 (for example, {16AA8FB8-4A98-4757-B7A5-0FF22C0A6E33}_1708.x64_1). If the
DLL is found, then it is loaded and a version check is done. If the DLL version
matches the ADO.NET provider version, then the application is launched. Otherwise,
if the DLL was found but the version was wrong, an attempt is made to delete
it. If this succeeds then a new DLL is unpacked into the directory. Otherwise,
the next directory (index 2, 3, etc.) is searched repeating step 3.
See http://dcx.sap.com/index.html#sqla170/en/html/3bcf66b76c5f1014b219867750fa0899.html
for more information on how the dbdata DLL is handled.
Step 3 is very similar to the previous behavior of the ADO.NET provider,
except that the provider will load the DLL and do a version check if the
DLL is already present and then attempt to delete it if the version is wrong.
Previously the provider would attempt to delete the DLL first and, if not
successful, load it and do a version check. In most situations, this should
help improve performance.
Note that if the provider DLL is in the global assembly cache (GAC), then
no dbdata DLL will be found there. Typically, the provider DLL will be located
with the application executable. Ultimately, your application will decide
how the provider is loaded if not through the GAC. Placement of the dbdata
DLL as described in step 1 is preferable to that of step 2.
It will be the ADO.NET application developer’s responsibility to make a
copy of the dbdata DLL during the development/test phase from the "temp"
directory and embed it in one of the directories described in step 1 or 2.
The developer must ensure correct bitness (32/64 bit) and version match (for
example, 17.0.8.4103) between the provider and the dbdata DLL in order for
steps 1 or 2 to work.
================(Build #4095 - Engineering Case #811732)================
In a multithreaded ADO.NET application, communicating with a slow-to-respond
database server on one thread can impact the performance of threads that
are communicating with quick-to-respond database servers. For example, if
a database server requires 1 minute to respond to a connection request, then
all other threads are delayed by 1 minute.
The SQL Anywhere .NET Data Provider has been revised to remove this serialization.
================(Build #2786 - Engineering Case #804585)================
A pooled connection can be invalidated by the database server for a number
of reasons including user creation, user deletion, password changes, connection
timeout, etc. When the database server invalidates a pooled connection, the
SQL Anywhere .NET Data Provider will discard the pooled connection and create
a new connection. For multithreaded applications, the same new connection
might have been given to two different threads that were opening connections.
Eventually each thread closed the connection, returning it to the pool,
and the server generated an assertion for the second thread (since the
connection was already pooled). When this problem occurred, the database
server returned the error "Assertion failed 104909 Invalid request on
pooled connection" to the application. This problem has been fixed.
================(Build #2144 - Engineering Case #801280)================
A .NET application could have received a NullReferenceException when calling
ClearAllPools or ClearPool in a multithreaded application.
Also, a .NET application could have gone into an infinite loop if the database
server was shut down while the .NET application was executing SQL statements.
These problems have been fixed.
================(Build #2119 - Engineering Case #800698)================
Some of the SQL Anywhere .NET Data Provider Database (class) and DbProviderServices
(class) methods may have failed if the underlying Table property was null.
The methods that may have failed include Database.Exists, Database.Delete,
Database.Create, Database.CreateIfNotExists, DbDatabaseExists, DbDeleteDatabase,
DbCreateDatabase, and DbCreateDatabaseScript. These Database methods are
used in Entity Framework applications. This problem has been fixed.
The following example code fragment illustrates the use of some of these
methods:
using (var db = new BloggingContext())
{
    Console.WriteLine("Delete the old database");
    db.Database.Delete();
}
using (var db = new BloggingContext())
{
    Console.WriteLine("Create a new database");
    db.Database.Create();
    if (db.Database.Exists())
    {
        Console.WriteLine("The database does exist");
    }
    else
    {
        Console.WriteLine("The database does not exist");
    }
}
================(Build #2117 - Engineering Case #800621)================
If a .NET connection had been pooled and was currently closed, and the database
server was terminated while the pooled connection was closed, then an attempt
to open the pooled connection would have resulted in an infinite loop. This
has now been fixed.
Note: the problem was introduced by the changes for Engineering Case 793308
("Slow performance of ADO.NET connection pooling").
================(Build #1462 - Engineering Case #797203)================
When using the SQL Anywhere .NET provider, it was possible to get an exception
when closing a pooled connection. The exception error text was “Invalid user
ID or password”.
This exception could have occurred for any condition where a connection
was not returned to a pool, for example, when a password had changed. The
complete list of conditions for which a connection is not pooled are described
at http://dcx.sap.com/index.html#sqla170/en/html/814d6d5c6ce2101482c9b5abd7938330.html.
This problem has been fixed. The .NET application will no longer see the
exception, and the connection is closed but not pooled.
================(Build #1462 - Engineering Case #797124)================
When using the SQL Anywhere .NET provider, it was possible to get a NullReferenceException
when calling ClearAllPools. This exception could have occurred in multithreaded
.NET applications that open or close pooled connections while another thread
calls ClearAllPools. This problem has been fixed. The .NET application will
no longer see the exception.
================(Build #1356 - Engineering Case #793308)================
The performance of the ADO.NET connection pool was slow compared to the .NET
ODBC bridge. Several changes have now been made to improve the performance.
================(Build #1349 - Engineering Case #793189)================
When attempting to call a stored procedure with many parameters with long
names, an error could have been returned indicating that parameters were
mismatched.
For example, when attempting to call a stored procedure with 99 very long
parameter names:
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_2", 10);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_1", 5);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_55", 550);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_54", "string");
.
.
.
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_98", 980);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_97", 970);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_99", 990);
SADataReader myDataReader = myCommand.ExecuteReader();
the SQL Anywhere .NET provider should have matched the supplied parameter
names with the actual parameter names, so the order should not have mattered.
However, the provider was not setting aside enough memory for the parameter
name lookup, resulting in matching by order rather than by name.
This problem has been fixed.
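The intended name-based matching can be illustrated with a small sketch
(this is a hypothetical helper, not the provider's implementation): values
are resolved against the procedure's declared parameter order, so the order
in which values were added does not matter, regardless of name length or
parameter count.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BinderDemo {
    private final Map<String, Object> byName = new HashMap<>();

    // Mirrors the AddWithValue calls above: store the value under its name.
    public void addWithValue(String name, Object value) {
        byName.put(name, value);
    }

    // declaredOrder is the parameter list as declared by the stored
    // procedure; binding walks that order, not insertion order.
    public List<Object> bind(List<String> declaredOrder) {
        List<Object> bound = new ArrayList<>();
        for (String name : declaredOrder) {
            bound.add(byName.get(name));
        }
        return bound;
    }
}
```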
================(Build #1298 - Engineering Case #791082)================
When loading and unloading the SQL Anywhere ADO.NET provider assembly via
AppDomain.CreateInstance with connection pooling enabled, the unload of the
assembly would hang in Sap.Data.SQLAnywhere.SAUnmanagedDll.Finalize(). This
has been fixed.
A workaround is to disable connection pooling on the connection string (Pooling=False).
================(Build #1271 - Engineering Case #785764)================
When using the .NET GetSchemaTable() method for a query on a table whose
name was not unique, an exception could have occurred in the provider. This
problem has been fixed.
For example, suppose the following query was executed against the table
“employees” owned by DBA, and there also exists a table “Employees” owned
by the user GROUPO.
SACommand cmd = new SACommand("SELECT * FROM DBA.employees", conn);
SADataReader reader = cmd.ExecuteReader();
DataTable schema = reader.GetSchemaTable();
An exception was raised in the GetSchemaTable call. When the table names
had the same letter case, an exception was not raised, but the wrong schema
information could have been returned.
================(Build #1257 - Engineering Case #787963)================
The .NET Data Provider would have generated an exception when attempting
to connect to a database server that had more than two digits in the minor
version. For example, the provider would have generated System.ArgumentOutOfRangeException
parsing the following version string:
SAP IQ/16.0.101.1215/20034/P/sp10.01/…
This problem has been fixed.
The normalized version string that is returned by the ServerVersion property
now has the following format:
##.##.###.####
^  ^  ^   ^
|  |  |   |
|  |  |   Build Number
|  |  Minor Version
|  Major Version
Release Version
This new format is also used in the DataSourceInformation collection (DataSourceProductVersion
and DataSourceProductVersionNormalized).
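The parsing involved can be sketched as follows (the class name, helper
name, and the trailing fields in the sample string are assumptions, not the
provider's actual code): the dotted version number is extracted from the
raw server version string, and a minor version with more than two digits
now parses without error.

```java
public class NormalizeDemo {
    static String normalizeServerVersion(String raw) {
        // The dotted version number is the second '/'-separated field,
        // e.g. "16.0.101.1215" in "SAP IQ/16.0.101.1215/20034/P/...".
        String version = raw.split("/")[1];
        String[] p = version.split("\\.");  // release, major, minor, build
        return p[0] + "." + p[1] + "." + p[2] + "." + p[3];
    }

    public static void main(String[] args) {
        // "sp10.01" and "x" stand in for the elided tail of the string.
        System.out.println(
            normalizeServerVersion("SAP IQ/16.0.101.1215/20034/P/sp10.01/x"));
    }
}
```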
================(Build #1218 - Engineering Case #787422)================
The changes for Engineering case 766113 caused the .NET Data Provider to
attempt to set the CHAINED option to ON when connecting to the utility database.
This resulted in the error “Permission denied: you do not have permission
to execute a statement of this type” when connecting to the utility database,
due to this option being disallowed for the utility database. This problem
has now been fixed.
================(Build #2054 - Engineering Case #798818)================
Sending a test email from the SQL Anywhere Cockpit on a non-English system
would have resulted in an email with mangled text. This has been fixed.
================(Build #2030 - Engineering Case #797513)================
In the SQL Anywhere Profiler, when the "Operations" or "Blocking"
tab was selected, clicking the Edit/Copy menu did nothing. This has been
fixed.
================(Build #2000 - Engineering Case #788771)================
The filter grammar used by the SQLA Profiler contains some keywords that
are made up of multiple words. Most of these keywords are run together without
an intervening blank (e.g. "ConsoleMessage"), but two include a
blank: "this week" and "last week".
In order to make the grammar easier to use, the Profiler now accepts "ThisWeek"
and "LastWeek" as synonyms for "This Week" and "Last
Week".
================(Build #1461 - Engineering Case #797126)================
The SQL Anywhere Profiler could have blocked statement execution in the database
to which it was connected if triggers fired. This has been fixed.
================(Build #1441 - Engineering Case #796314)================
In the SQL Anywhere Cockpit, when trying to filter the database property
list or the alerts list, the list would always clear. This has been fixed.
================(Build #1440 - Engineering Case #796269)================
The SQL Anywhere Profiler could have incorrectly reported low CPU usage if
the server process used fewer logical processors than were available on the
machine. This has been fixed.
================(Build #1351 - Engineering Case #793262)================
The "Profiling" tab of the SQL Anywhere Profiler contains two tables:
the top one lists the stored procedures that were called, while the bottom
one shows the SQL source for the selected procedure. The vertical scroll
position of the second (source) table was always reset to the top whenever
the Profiler read new stored procedure profiling data. For a busy database,
that meant that reading the source could have been very difficult because
the contents were always being scrolled to the top. This has been fixed.
================(Build #1348 - Engineering Case #793124)================
When running the SQL Anywhere Cockpit, the following message would occasionally
have been written to the database server console “Cannot convert ‘’ to timestamp”.
It was also possible for this message to have been reported in a message
box when interacting with the SQL Anywhere Cockpit in a browser. This has
been fixed.
================(Build #1343 - Engineering Case #792865)================
The SQL Anywhere Profiler has a "Statements" tab which lists the
statements that have been executed and how long they took. If a saved profiler
session file (a .sqlap file) was loaded, statements were shown executing
twice as often as they actually were. This has been fixed.
================(Build #1325 - Engineering Case #792141)================
After a 30 minute period of inactivity the SQL Anywhere Cockpit will automatically
log out the user. If the alert configuration dialog was open when this happened,
the user was logged out but the dialog remained open. This has been fixed
so that the dialog now closes.
================(Build #1321 - Engineering Case #792086)================
On the Thing Inspector for the “CPU usage is high” alert, there is a server
settings tile. Clicking on the links in the server settings tile would have
incorrectly navigated to the “Properties” page of server Thing Inspector.
Now, clicking on the links in the server settings tile opens the Server Thing
Inspector and navigates to the “Settings” page.
================(Build #1312 - Engineering Case #790676)================
The SQL Anywhere Profiler's filter syntax includes a time unit (seconds,
minutes, hours, days, or weeks). Previously, the names of the times had to
be specified in English, unlike the rest of the filter syntax which is localized.
Now, localized versions of the keywords "seconds", "minutes",
"hours", "days", and "weeks" are accepted.
The symbols for these units are NOT localized, and continue to be "s",
"m", "h", "d", and "w". "min"
can also be used to denote minutes.
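The non-localized symbols listed above can be sketched as a small mapping
(the class is a hypothetical helper for illustration, not the Profiler's
parser):

```java
import java.time.Duration;

public class UnitDemo {
    // Maps a filter unit symbol to the duration of one unit.
    public static Duration unitDuration(String symbol) {
        switch (symbol) {
            case "s":             return Duration.ofSeconds(1);
            case "m": case "min": return Duration.ofMinutes(1);
            case "h":             return Duration.ofHours(1);
            case "d":             return Duration.ofDays(1);
            case "w":             return Duration.ofDays(7);
            default:
                throw new IllegalArgumentException("unknown unit: " + symbol);
        }
    }
}
```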
================(Build #1310 - Engineering Case #791661)================
When copying numeric data from a result set to the clipboard in the Interactive
SQL utility or SQL Central, the copied value would have incorrectly included
the thousands separator when copying cells or column. This has been fixed
so that the thousands separator is not used in any copied data.
================(Build #1308 - Engineering Case #791559)================
In the SQL Anywhere Profiler, when the "Operations" tab was selected, clicking
the "Tools/Suggest Index for Statement" menu did nothing. Now, it opens the
Index Consultant for the selected statement.
A work-around for this problem is to right-click the operation on the "Operations"
tab, and click "Suggest Index for Statement" from the context menu.
================(Build #1300 - Engineering Case #791227)================
The UI5 clickjacking protection introduced in UI5 version 1.28 has now been
implemented in the SQL Anywhere Cockpit.
================(Build #1289 - Engineering Case #790739)================
In the SQL Anywhere Profiler, the tooltip for the "Server Load"
panel contains a timestamp which is usually formatted using whatever timestamp
format the system (OS) has been configured to use. On Japanese and Chinese
systems, the timestamp was formatted with incorrect characters. This has
been fixed.
================(Build #1284 - Engineering Case #790552)================
When running the SQL Anywhere Cockpit, in some circumstances an error similar
to the following may have been printed to the database server log:
Error while evaluating SQLA Cockpit Alerts
Cannot convert '' to timestamp
"dbconsole"."update_expr_data" at line 34
This has been fixed.
================(Build #1277 - Engineering Case #790264)================
The SQL Anywhere Profiler could have crashed immediately after applying a
filter if the "Operations" tab was selected. This issue was intermittent,
and depended on the position of the mouse on the screen at the time the filter
was applied. This problem has now been fixed.
================(Build #1276 - Engineering Case #790217)================
When starting the SQL Anywhere Profiler, a spurious error message could have
been reported saying that "You do not have 'SET ANY SYSTEM OPTION' system
privilege." This message could have been reported if the user didn't
have the SET ANY SYSTEM OPTION system privilege. Now, this error message
is not reported, even if the user does not have the system privilege.
================(Build #1252 - Engineering Case #788849)================
If the "Filter" field in the Profiler window had focus, and a large
number of operations had been collected, just moving the cursor (caret) in
the "Filter" field could have been unusually slow. This has been
fixed.
A number of other minor corrections have also been made to the "Filter"
field:
1. If the "Filter" field was empty, and you right-clicked, then clicked
"Paste", the text was incorrectly shown grey and in italics.
2. It was possible for the prompt text (e.g. "Filter") to remain in the
component after clicking in it.
3. Under Windows' "Classic" desktop theme, the component looked disabled,
even when it wasn't.
4. The rollover effects for the button in the SearchField were being shown
only when the mouse button was down. Effects should be shown while the
mouse is over the button regardless of the mouse button state.
5. When the MRU list was open, rolling the mouse over the list should have
selected the item under the mouse.
================(Build #1250 - Engineering Case #788767)================
The SQL Anywhere Profiler could, in rare situations, have crashed when trying
to open a .SQLAP file, if that file could not be opened (because it was on
a network drive, say, and the connection to the drive was subsequently lost).
This has been fixed.
================(Build #1208 - Engineering Case #786878)================
Filter expressions in the SQL Anywhere Profiler allow for localized keywords.
The localized version of the "severity" keyword was not recognized
by the software. Now, it is. This bug prevented filtering and highlighting
from working correctly on non-English computers.
================(Build #1208 - Engineering Case #786874)================
Filter expressions in the SQL Anywhere Profiler allow for localized keywords.
The localized versions of "executionTime" and "blockedTime"
keywords were not recognized by the software. Now, they are. This bug prevented
filtering and highlighting from working correctly on non-English computers.
================(Build #1207 - Engineering Case #786805)================
The following issues in the SQL Anywhere Profiler related to filtering have
been fixed:
- On non-English computers that are configured to use 12-hour clocks, filtering
by time would always have resulted in no matching operations.
- Setting a time range in the "Server Load" panel did not work
on non-English computers.
- The "Add Filter Expression" window contains a combobox containing
names of users that have connected. Previously, if a given user had connected
more than once, the name would appear more than once in the combobox. Now,
names appear only once.
================(Build #1203 - Engineering Case #786683)================
The SQL Anywhere Profiler could have crashed when disconnecting from a database
when clicking the "Cancel" button in the status dialog that is
shown while the Profiler disconnects. This problem did not occur consistently.
This has been fixed.
================(Build #1202 - Engineering Case #786608)================
The following Index Consultant issues have been fixed in the SQL Anywhere
Profiler:
- If the workload contained too many statements to hold in memory at once,
the Workload Index Consultant could have crashed with an error message that
did not explicitly say that it had run out of memory. Now, a clear error
message is displayed, and the program does not crash.
- Statements with host variables were not being considered by the Index
Consultant. Now, they are.
================(Build #1201 - Engineering Case #788630)================
In SQL Central, it was not possible to set the server’s quitting time on
the property sheet’s Options page if the timestamp_format option was set
to a non-default value (the default is YYYY-MM-DD HH:NN:SS.SSS). This has
been fixed. The property sheet now uses a free-form text field rather than
a masked text field. Also, the current time is now shown in the same format
as is required for setting the quitting time.
================(Build #1190 - Engineering Case #788633)================
In SQL Central, the Unload Database wizard could have crashed after unloading
the database. The crash was intermittent, and happened only rarely. It has
been fixed.
================(Build #1190 - Engineering Case #786123)================
When opening the SQL Anywhere Cockpit from SQL Central, if the Cockpit supported
both IPv4 and IPv6 addresses then an IPv6 address would have been used. Now
an IPv4 address is used.
================(Build #1187 - Engineering Case #785850)================
The connection cookie may not have expired for users in the SQL Anywhere
Cockpit. This has been corrected.
================(Build #1184 - Engineering Case #785748)================
The following issues surrounding the SQL Anywhere Profiler's reporting of
row locks have been corrected:
1 - When connected to a database, row locks were not consistently shown
on the "Blocking" tab's "Blocking Objects" panel.
2 - When row locks were shown on the "Blocking Objects" panel, they
appeared as "Table" locks.
3 - When a table lock was shown in the top table of the "Blocking Objects"
panel, the details (lower) panel could have incorrectly contained row locks
for database tables other than the one selected in the upper table.
================(Build #1183 - Engineering Case #785694)================
In the SQL Anywhere Profiler, there is a "File/Clear" menu that
discards the profiling data collected so far. If the profiling session was
subsequently saved to a file, the file would have included profiling data
collected before clicking "File/Clear". This has been fixed so
that the data collected before clearing is no longer saved to the file.
================(Build #1182 - Engineering Case #785758)================
The SQL Anywhere Cockpit was vulnerable to clickjacking. This has been fixed.
================(Build #1177 - Engineering Case #785393)================
It was possible for the Profiler to list a non-existent statement on the
"Operations" tab when connected to a database. The statement did
not appear when the profiling session was saved to a file and the file then
opened. The SQL for the bogus statement was typically the word "TABLE".
This has been fixed.
================(Build #1177 - Engineering Case #785388)================
The SQL Anywhere Profiler's filter was inadvertently cleared after disconnecting
from a database. Now, the filter is not changed when disconnecting.
================(Build #1177 - Engineering Case #785385)================
The SQL Anywhere Profiler indicates intervals of expected reduced server
performance with a pale red background in the "Server Load" panel
and on the "Operations" tab. These intervals correspond to backups,
checkpoints, growing the cache, etc. The red background was being shown only
after the server operation completed. The Profiler should have (but did
not) shown it while the operation was executing. This has been fixed.
================(Build #1174 - Engineering Case #785194)================
The SQL Anywhere Profiler could have crashed if a filter was set, and a statement
in the profiled database blocked. This has been fixed.
================(Build #1163 - Engineering Case #788632)================
In the SQL Central Create Database wizard, when starting a new local server
to create the database, the server name would have defaulted to the database
file name. This could result in an invalid server name or a server name that
wasn’t recommended. For example, if the database file name contained characters
other than 7-bit ASCII. This has been fixed. Now if the database file name
isn’t a valid or recommended server name, then the wizard generates a random
server name.
================(Build #1163 - Engineering Case #788631)================
In SQL Central, if a breakpoint was deleted from the Breakpoints window
when the breakpoint's stored procedure was not selected, the breakpoint was
still shown when the procedure was subsequently selected. This has been
fixed.
================(Build #6272 - Engineering Case #824218)================
It was not possible to ping a HANA Data Lake server from the "ODBC
Configuration for SQL Anywhere" dialog box on Windows, even if the option
"direct=yes" was given in the "Other protocol options" editable box on the
Security page. This has been fixed.
================(Build #4829 - Engineering Case #815582)================
In rare cases, a client application that used TLS encryption with a large
number of threads may have hung. This has been fixed.
================(Build #2139 - Engineering Case #801193)================
32-bit client applications running on SPARC systems would have crashed when
connecting to the server. This has been fixed.
================(Build #1465 - Engineering Case #796899)================
The Embedded SQL function sqlda_string_length would have returned inconsistent
results for some types in certain situations. If the column in a query was
described as DT_DATE, DT_TIME, DT_TIMESTAMP, DT_NSTRING, or DT_STRING, the
length reported by this function was correct before fill_sqlda was called,
but incorrect after fill_sqlda was called.
The following example illustrates the use of sqlda_string_length:
for( col = 0; col < sqlda->sqld; col++ ) {
sqlda->sqlvar[col].sqllen = sqlda_string_length( sqlda, col ) - 1;
sqlda->sqlvar[col].sqltype = DT_STRING;
}
fill_sqlda( sqlda );
In the above example, if sqlda_string_length is called after the fill_sqlda
call, the lengths returned are 1 greater than before.
This problem has been fixed. The sqlda_string_length function will now
account for the fact that the fill_sqlda function (or any of its variants)
has been called.
================(Build #1465 - Engineering Case #796408)================
Execution of an Embedded SQL SET DESCRIPTOR statement would have failed to
copy the last two bytes of data from a host variable of type VARCHAR or BINARY
to the SQLDA variable data array.
For example, consider the following code fragment:
static DECL_VARCHAR(17) myvc;
.
.
.
myvc.len = 17;
memmove( (char *)myvc.array, "12345678901234567", 17 );
EXEC SQL ALLOCATE DESCRIPTOR sqlda1 WITH MAX 10;
EXEC SQL SET DESCRIPTOR sqlda1 COUNT = 1;
length = 17;
EXEC SQL SET DESCRIPTOR sqlda1 VALUE 1 TYPE = 448, LENGTH = :length;
fill_sqlda( sqlda1 );
EXEC SQL SET DESCRIPTOR sqlda1 VALUE 1 DATA = :myvc;
_check_condition( SQLCODE == 0
&& strncmp( (char *)myvc.array,
((VARCHAR *)(sqlda1->sqlvar[0].sqldata))->array, 17 )
== 0 );
free_filled_sqlda( sqlda1 );
The array field ( ((VARCHAR *)(sqlda1->sqlvar[0].sqldata))->array ) would
have contained all but the last two characters of the myvc variable.
This problem has been fixed. With the new version of DBLIB, any Embedded
SQL applications that use DECL_VARCHAR and DECL_BINARY must be recompiled
using the Embedded SQL preprocessor (sqlpp).
The Embedded SQL GET DESCRIPTOR statement, which copies data from the SQLDA
to the host variable, does so correctly.
================(Build #1403 - Engineering Case #795135)================
If a connection string contained a START= parameter which included an -ec
or -xs option containing a path and filename with spaces, a parsing error
could have been given even if the value was enclosed in quotes.
For example: UID=…;PWD=…;DBF=mydatabase.db;START=dbeng17 -xs “https(identity=my
spacey file.id;identity_password=test)”
This has been fixed.
================(Build #1164 - Engineering Case #784668)================
Support has been added to Embedded SQL for wide deletes and updates. Two
new samples have been added to demonstrate wide operations in Embedded SQL.
The examples are found in the following folders:
samples\SQLAnywhere\ESQLWideDelete
samples\SQLAnywhere\ESQLWideInsert
================(Build #6277 - Engineering Case #824318)================
When a JDBC application fetched an empty temporal (0000/00/00) as a string,
the wrong date was formatted.
This problem has been corrected. The JDBC driver will now format an empty
temporal as a single space character for all DATETIME/TIMESTAMP, DATE, and
TIME data types.
This fix affects applications that use JDBC, such as SAP IQ's Interactive
SQL tool. A locally formatted empty temporal will display as a space.
Empty temporals are supported for HANA ES.
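The corrected formatting rule can be sketched as follows. The "0000-00-00"
raw representation used here is an assumption for illustration; the driver's
internal representation of an empty temporal may differ.

```java
public class TemporalDemo {
    // An empty temporal fetched as a string becomes a single space;
    // any other temporal value passes through unchanged.
    public static String formatTemporal(String rawValue) {
        if (rawValue == null || rawValue.startsWith("0000-00-00")) {
            return " ";  // single space for the empty temporal
        }
        return rawValue;
    }
}
```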
================(Build #5763 - Engineering Case #818849)================
When using version 11 of the Java Runtime Environment (for example, JRE 11.0.2)
and the SQL Anywhere JDBC driver (sajdbc4), a crash could have occurred in
a JDBC application, indicating that an access violation had occurred. For
example, the crash report may begin with the following lines.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000001449e080b95, pid=19604,
tid=2700
#
# JRE version: Java(TM) SE Runtime Environment (11.0.2+9) (build 11.0.2+9-LTS)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (11.0.2+9-LTS, mixed mode,
tiered, compressed oops, g1 gc, windows-amd64)
This problem has been fixed.
Also, memory allocations for data buffers were much larger than required.
The memory footprint for a running application could have been much larger
than necessary and performance of batches could be impacted.
This problem has also been fixed.
================(Build #4038 - Engineering Case #810460)================
The SQL Anywhere JDBC driver did not load when used with Java Development
Kit 9 (JDK 9) Early-Access Builds. This problem has been fixed.
================(Build #2786 - Engineering Case #804664)================
When using any of the JDBC ResultSet class "get" methods (for example,
getDouble, getFloat, getInt, etc.) on a NUMERIC or DECIMAL column with the
SQL Anywhere JDBC driver (sajdbc4), a memory leak occurred. A workaround
is to CAST the numeric/decimal column to a string in the corresponding SQL
query. For example, CAST(total_value AS VARCHAR(16)).
This problem has been fixed.
================(Build #2202 - Engineering Case #803052)================
When a JDBC application called getTypeInfo() of the DatabaseMetaData
class, some column names were returned incorrectly. The PRECISION column
of the result set was incorrectly named COLUMN_SIZE. The AUTO_INCREMENT
column of the result set was incorrectly named AUTO_UNIQUE_VALUE. The
following example would have failed.
DatabaseMetaData meta = conn.getMetaData();
ResultSet typeinfo = meta.getTypeInfo();
while (typeinfo.next())
{
    System.out.printf("PRECISION=%s\n", typeinfo.getString("PRECISION"));
    System.out.printf("AUTO_INCREMENT=%s\n", typeinfo.getString("AUTO_INCREMENT"));
}
This has been fixed.
================(Build #2200 - Engineering Case #802981)================
When a JDBC application called getColumns or getProcedureColumns of the
DatabaseMetaData class, some of the returned metadata information was incorrect:
DatabaseMetaData meta = conn.getMetaData();
ResultSet columns = meta.getColumns(null, null, "AllTypes", null);
- COLUMN_SIZE for numeric types is the precision, or number of digits, that
can be represented. It does not include the sign. The COLUMN_SIZE reported
for BIGINT, UNSIGNED BIGINT, UNSIGNED INT, and UNSIGNED SMALLINT was incorrect.
This has been corrected from byte length to numeric precision. COLUMN_SIZE
for INTEGER, TINYINT, and SMALLINT is unchanged.
- DECIMAL_DIGITS for all exact numeric types other than SQL_DECIMAL and
SQL_NUMERIC is 0. The DECIMAL_DIGITS reported for BIGINT, UNSIGNED BIGINT,
UNSIGNED INT, and UNSIGNED SMALLINT was NULL. This has been corrected to 0.
- DECIMAL_DIGITS for SQL_TYPE_TIME and SQL_TYPE_TIMESTAMP is the number of
digits in the fractional seconds component. The DECIMAL_DIGITS reported for
TIME and TIMESTAMP WITH TIME ZONE was NULL. This has been corrected to 6.
- CHAR_OCTET_LENGTH is the maximum length in bytes of a character or binary
data type column. The CHAR_OCTET_LENGTH for TIMESTAMP WITH TIME ZONE was
NULL. This has been corrected to 33.
================(Build #1467 - Engineering Case #797242)================
If the JDBC setMaxFieldSize was used to truncate the length of a binary column
transmitted from the database server to the client, a crash may have occurred
in the JDBC application. The JDBC setMaxFieldSize(int max) function sets
the limit for the maximum number of bytes that can be returned for character
and binary column values in a ResultSet object produced by this Statement
object. For example, if the binary column length is 300,000 and max is 256,
then a crash may have occurred in a getBytes call for that column. The following
is an example of a query that can produce binary column values with length
300,000.
select cast(repeat( '0123456789', 30000 ) as long binary) from sa_rowgenerator(1,4)
This problem also affected the Interactive SQL utility (dbisql) when fetching
BINARY columns.
The problem has been fixed.
================(Build #1165 - Engineering Case #784055)================
A JDBC application could have found that fetching result sets with long varchar,
long nvarchar or long binary columns took much longer with a scrollable cursor
(i.e. an insensitive or sensitive statement) when compared to a non-scrollable
cursor (i.e. a forward only statement). This difference in performance was
most noticeable if most of the long values were smaller than 256K. The performance
issue has now been fixed and scrollable cursors now perform as well as non-scrollable
cursors.
================(Build #6295 - Engineering Case #824581)================
The error 'Permission denied: you do not have permission to select from "SYSINDEX"'
could have occurred when users tried to retrieve data from a data lake IQ
database using Microsoft Excel. This issue has been fixed.
================(Build #6257 - Engineering Case #823981)================
When performing a multi-row fetch (wide fetch) using the ODBC driver, if
a SQLE_STRING_RIGHT_TRUNCATION error occurred in a row, the row preceding
the error row was not returned as part of the result set. For example, if
the rowset size was 15 and row 10 had a string right truncation error, then
only 8 rows were returned instead of 9. If the rowset size was 1 then the
problem did not arise. This has been fixed.
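The arithmetic behind the example above can be stated in one line (the
helper below is purely illustrative, not driver code): with the fix, a
truncation error in row N of the rowset means all N-1 preceding rows are
still returned.

```java
public class WideFetchDemo {
    // Rows 1 .. errorRow-1 precede the error row and should be returned.
    public static int rowsReturnedBeforeError(int errorRow) {
        return errorRow - 1;  // e.g. error in row 10 -> rows 1..9 returned
    }
}
```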
================(Build #6240 - Engineering Case #823660)================
The ODBC driver could have crashed during connect when multiple threads
shared the same environment handle, if encryption was used. This has been
fixed.
================(Build #6225 - Engineering Case #822551)================
Most ODBC metadata functions, such as SQLTables, SQLPrimaryKeys, SQLStatistics,
and so on, permit the specification of a catalog name argument. If a catalog
name was specified, the ODBC driver would return a "Driver not capable"
error. The catalog name is the database name. A connection is made to a
single database; if access to a different database is required, the
application must disconnect from one database and connect to the other. So
in a sense, specifying a catalog name to an ODBC metadata function is
superfluous. However, it is not incorrect to do so.
Also, JDBC metadata functions permit the specification of a catalog name
argument. If a catalog name was specified, the JDBC driver would return the
same "Driver not capable" error. The following is a sample JDBC call.
ResultSet tables = metaData.getTables( conn.getCatalog(), "GROUPO", "%", types );
These problems have been fixed.
The driver will now compare the catalog name to the database name and accept
it if there is a case-insensitive match. Otherwise, the driver will return
the same error as before (for example, if the database name is "demo"
but you specify "test" for the catalog name, the error is returned).
The "TABLE_CAT" column of the result set will now contain the database name
(if permitted by the rules), instead of NULL. The database name will
be returned even if it is not specified as an input parameter.
These fixes apply to all ODBC metadata functions that accept a catalog name
as input or that return a catalog name (or two as in SQLForeignKeys) as part
of the result set.
The special ODBC call SQLTables(NULL, NULL, NULL, SQL_ALL_TABLE_TYPES) returns
a result set of supported table types. It does this correctly, but the result
set column types should all be VARCHAR types according to the ODBC specification.
Previously, the driver returned SMALLINT, SMALLINT, SMALLINT, CHAR(17), SMALLINT.
Note that SQLTables with other arguments returns appropriate column types
(like CHAR(128)).
This has been fixed.
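The new acceptance rule described above can be sketched as a case-insensitive comparison between the supplied catalog name and the connected database name. This is an illustrative helper, not the driver's actual code; the function name is hypothetical.

```c
#include <ctype.h>
#include <stddef.h>

/* Hypothetical sketch of the driver's new rule: a catalog argument is
   accepted only if it matches the connected database name, ignoring case.
   A NULL catalog is always accepted (no filtering requested). */
static int catalog_acceptable( const char *catalog, const char *db_name )
{
    if( catalog == NULL ) return 1;
    while( *catalog && *db_name ) {
        if( tolower( (unsigned char)*catalog ) !=
            tolower( (unsigned char)*db_name ) ) return 0;
        catalog++; db_name++;
    }
    return *catalog == '\0' && *db_name == '\0';
}
```

Under this rule, "DEMO" is accepted when the database name is "demo", while "test" is rejected and produces the same error as before.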
================(Build #6223 - Engineering Case #823311)================
When using the SQL Anywhere ODBC Driver Manager, the following problems could occur.
1. It was possible that a crash could occur when using non-null-terminated
parameters (i.e., not SQL_NTS).
2. It was possible that a crash could occur when using the ODBC tracing
features.
3. Tracing output did not display 64-bit parameters correctly.
4. Tracing output did not include the process and thread ID unless the undocumented
TraceLevel=ALL option was used. However, when this option was used, the thread
ID was not correctly displayed.
This has been fixed.
Note: These are fixes for the ODBC Driver Manager, not the ODBC Driver.
On Windows platforms, the Microsoft ODBC Driver Manager is normally used.
On Linux and other platforms, the SQL Anywhere ODBC Driver Manager can
be used but it is not required unless you would like to use the tracing features
(TraceLevel and TraceLog).
================(Build #6217 - Engineering Case #823347)================
The SQL Anywhere ODBC driver manager could crash when it ran out of memory.
This has been fixed.
================(Build #5877 - Engineering Case #819794)================
The SQL Anywhere ODBC Driver Manager for UNIX/Linux might crash (SIGSEGV)
during a SQLDriverConnect or SQLDriverConnectW call. A stack trace shows
a fault in the strcmp function which is called from SQLDriverConnectW. This
has been fixed.
================(Build #4869 - Engineering Case #816675)================
SQLGetInfo( SQL_PARAM_ARRAY_ROW_COUNTS ) returns SQL_PARC_BATCH which means
that individual row counts are available for each set of parameters in a
"wide" operation such as INSERT, DELETE, or UPDATE. This is incorrect.
It should return SQL_PARC_NO_BATCH which means that there is only one row
count available, which is the cumulative row count resulting from the execution
of the statement for the entire array of parameters. This is how the SQL
Anywhere ODBC driver operates and the return value has been corrected to
SQL_PARC_NO_BATCH.
SQLExecute() must return SQL_NO_DATA instead of SQL_ERROR if all individual
operations return SQL_NO_DATA. For example, a wide DELETE with no matches
on any parameters in the parameter set should return SQL_NO_DATA. This has
been corrected.
SQLMoreResults() should set the parameter status array for each parameter
it processes (the second, third, etc. parameter set). It incorrectly resets
the first element of the status array for each parameter set. This problem
has been fixed.
SQLMoreResults() should not ignore the parameter operation array (SQL_PARAM_PROCEED
/ SQL_PARAM_IGNORE) to decide whether to operate on or ignore a parameter.
It incorrectly used the first element of the parameter operation array for
each parameter set. This problem has been fixed.
================(Build #4821 - Engineering Case #815389)================
For SQLGetDescField(SQL_DESC_UNNAMED), the SQL Anywhere ODBC driver always
returns SQL_UNNAMED.
For columns and parameters that have names, the ODBC driver should return
SQL_NAMED. The following is a sample code sequence for parameter 2 of a prepared
CALL statement.
SQLULEN named = 0;
SQLGetStmtAttr( hstmt, SQL_ATTR_IMP_PARAM_DESC, &hdesc, 0, NULL );
SQLGetDescField( hdesc, 2, SQL_DESC_UNNAMED, (SQLPOINTER) &named, 0,
NULL );
This problem has been fixed.
================(Build #4816 - Engineering Case #815182)================
Beginning with SQL Anywhere version 17.0.0 GA, the SQL Anywhere ODBC driver
returns an incorrect result for the SQLStatistics(SQL_INDEX_ALL) call. This
has been fixed.
================(Build #4784 - Engineering Case #813650)================
If an error occurs when inserting a batch of rows with the SQL Anywhere ODBC
driver (a wide insert), then the driver drops into single row insert mode.
If this results in all rows being inserted correctly, then the ODBC driver
should return SQL_SUCCESS, not SQL_ERROR. This problem has been fixed. The
ODBC driver will return SQL_SUCCESS if all rows are inserted without error
and SQL_ERROR if one or more rows fail insertion.
Note that returning SQL_ERROR deviates from the ODBC standard which requires
that SQL_SUCCESS_WITH_INFO be returned if some rows are successfully inserted.
================(Build #4133 - Engineering Case #813069)================
Using Microsoft’s .NET System.Data.Odbc interface, data truncation can occur
when fetching LONG VARCHAR or LONG BINARY columns that are longer than 256K.
Since LONG VARCHAR and LONG BINARY columns can be as large as 2147483647
bytes, 256K was chosen as a compromise default buffer length by the
SQL Anywhere ODBC driver with the provision that this value could be overridden
by setting SQL_ATTR_MAX_LENGTH with the SQLSetConnectAttr or SQLSetStmtAttr
functions.
In an ODBC application, this problem can also be handled with SQLGetData
calls to fetch the data in chunks. However, the System.Data.Odbc interface
does not provide the ability to access these features. To overcome this problem,
the application developer or user can now set SQL_ATTR_MAX_LENGTH using a
new ODBC connection parameter MaxLength.
For example, the connection string “DSN=Test17;DBN=Demo;UID=DBA;MaxLength=3145728”
sets SQL_ATTR_MAX_LENGTH to 3 MB. The MaxLength parameter can also be set
in a Data Source using the Microsoft ODBC Data Source Administrator (ODBCAD32)
or dbdsn. In an ODBC application, the MaxLength parameter value, if it is
specified, can be overridden using SQL_ATTR_MAX_LENGTH with the SQLSetStmtAttr
function.
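A minimal sketch of computing the limit and building the example connection string follows. The DSN, DBN, and UID values are the placeholders from the example above; the helper names are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Sketch: the MaxLength connection parameter overrides the driver's
   default 256K SQL_ATTR_MAX_LENGTH value. Here we build the example
   connection string with a 3 MB limit. */
static long megabytes( long mb ) { return mb * 1024 * 1024; }

static const char *build_connstr( long max_len )
{
    static char connstr[128];
    snprintf( connstr, sizeof( connstr ),
              "DSN=Test17;DBN=Demo;UID=DBA;MaxLength=%ld", max_len );
    return connstr;
}
```

The resulting string would then be passed to SQLDriverConnect; 3 MB works out to the 3145728 shown in the example.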
================(Build #4069 - Engineering Case #811134)================
The SQL Anywhere ODBC driver returns 8 for the column length of a SQL_BIGINT
data type for the following ODBC API procedures and the indicated parameter
value:
SQLColAttributes(SQL_COLUMN_LENGTH)
SQLColAttribute(SQL_COLUMN_LENGTH)
SQLGetDescField(SQL_COLUMN_LENGTH)
The number 8 represents the number of bytes required for the default binding
of this data type as binary. This behavior conforms to current Microsoft
ODBC drivers.
The older ODBC 2.0 specification called for a default binding of SQL_C_CHAR
and a column length of 20. This requirement likely originated in the days
when 64-bit integer values were not supported natively by the computer processors
of the day. Visual Basic 6 Remote Data Objects (RDO) modules expect this
behavior. Visual Basic 6 was introduced around 1998 and support for it was
dropped by Microsoft in 2008.
Version 12 and earlier SQL Anywhere ODBC drivers return a length of 19 in
this situation, but this was changed in version 16 to favor conformance with
Microsoft ODBC drivers.
This problem has been addressed. In order to resume support for VB applications
using RDO, the undocumented connection parameter VBRDO can now be used to
cause the SQL Anywhere ODBC driver to revert to the ODBC 2.0 specification’s
stipulation that 20 be returned for the column length of SQL_BIGINT.
You must include VBRDO=Yes (or VBRDO=True, VBRDO=1, VBRDO=On) in the application's
connection parameters in order to obtain the old behavior that containers
like Microsoft Remote Data Control (MSRDC) require.
Example: DSN=Test17;VBRDO=Yes
When using the Microsoft ODBC Data Source Administrator to create or modify
a data source, the VBRDO parameter can be set manually on the Advanced tab.
It can also be set or queried using the SQL Anywhere dbdsn utility.
Choosing VBRDO=Yes causes SQLColAttributes(SQL_COLUMN_LENGTH), SQLColAttribute(SQL_COLUMN_LENGTH),
and SQLGetDescField(SQL_COLUMN_LENGTH) to return 20 instead of 8. Omitting
the parameter or choosing VBRDO=No preserves the current behavior.
The use of VBRDO=Yes also causes SQLGetInfo(SQL_CATALOG_NAME_SEPARATOR)
to return "." instead of "", and SQLGetInfo(SQL_CATALOG_LOCATION)
to return SQL_CL_START instead of 0. This behavior is not new to the driver
and was required in the past to support some RDO functionality. Note that
the ODBC driver does not support catalogs since a client connection is made
to the database, not the server.
================(Build #3419 - Engineering Case #807627)================
The SQL Anywhere ODBC driver updates the parameter 1 indicator value after
executing a parameterized statement, for example, an INSERT statement, when
using SQLExecute/SQLExecDirect. This could result in memory corruption, especially
if row-wise parameter binding was used. The driver should not alter any parameter
indicator values. This problem has been fixed.
The ODBC driver could incorrectly update a parameter status array element
with SQL_PARAM_SUCCESS_WITH_INFO even though there was no corresponding diagnostic
record. If the operation is successful, then the array element must be set
to SQL_PARAM_SUCCESS. This problem has been fixed.
================(Build #3417 - Engineering Case #807350)================
Fixed two problems:
1) In all clients, if a fetch that used prefetch was cancelled, the next
request could block indefinitely
2) In ODBC 2 clients (i.e., ODBC applications that do not set SQLSetEnvAttr
SQL_ATTR_ODBC_VERSION to SQL_OV_ODBC3), attempting to cancel a statement
that did not currently have a request in progress could cause unexpected
behavior if another thread was concurrently accessing the connection.
These problems have been fixed.
================(Build #2834 - Engineering Case #806066)================
When SQL_ROWVER was specified as the IdentifierType to the SQLSpecialColumns
ODBC function, a SQL error "syntax error near [[" was returned.
When SQL_BEST_ROWID was specified, there was no syntax error. This problem
has been fixed. The JDBC DatabaseMetaData::getVersionColumns method was also
affected by this problem and has been fixed.
================(Build #2236 - Engineering Case #803871)================
The following corrections have been made to the SQL Anywhere ODBC driver.
- SQLDescribeCol(ColumnSize) for TIME was 6 and is now 15, for TIMESTAMP/DATETIME/SMALLDATETIME
was 6 and is now 26.
- SQLDescribeCol(DecimalDigits) for TIME was 0 and is now 6.
- The ODBC 2.0 SQLColAttributes(SQL_COLUMN_LENGTH) function returned a display
size for all types, and now returns the octet length for all types.
- The ODBC 2.0 SQLColAttributes(SQL_COLUMN_PRECISION) function result for
REAL was 24 and is now 7, for DOUBLE was 53 and is now 15, for TIME was 6
and is now 15, for TIMESTAMP/DATETIME/SMALLDATETIME was 6 and is now 26,
for TEXT/IMAGE was 0 and is now 2147483647.
- SQLColAttribute(SQL_DESC_DISPLAY_SIZE) for BIT was 2 and is now 1.
- SQLColAttribute(SQL_DESC_LENGTH) for NUMERIC(X,5) was X+2 and is now X.
- SQLColAttribute(SQL_DESC_PRECISION) for DATE was 10 and is now 0 (the
numbers of digits in the fractional seconds component for the SQL_TYPE_TIME,
SQL_TYPE_TIMESTAMP, or SQL_INTERVAL_SECOND data type).
- SQLColAttribute(SQL_DESC_SCALE) for TIME was 0 and is now 6. This matches
the Microsoft ODBC driver; however, the field value is undefined for this
data type.
- The SQL Anywhere ODBC driver now ensures that the ColumnSize value returned
by SQLDescribeCol() matches the value returned by SQLColAttribute(SQL_DESC_LENGTH).
- The SQL Anywhere ODBC driver now ensures that the DecimalDigits value
returned by SQLDescribeCol() matches the value returned by SQLColAttribute(SQL_DESC_SCALE).
================(Build #2203 - Engineering Case #803007)================
When an ODBC application had called SQLColAttribute, SQLColumns, SQLProcedureColumns,
or SQLGetTypeInfo, some of the returned metadata was incorrect:
- FLOAT, REAL, and DOUBLE are approximate numeric data types so the SQL_DESC_NUM_PREC_RADIX
is 2 and the SQL_DESC_PRECISION field must contain the number of bits. For
FLOAT, REAL, and DOUBLE columns, SQLGetTypeInfo returns a NUM_PREC_RADIX
of 2. The reported COLUMN_SIZE values were 15, 7, and 15 respectively, which
represent base 10 precision. The COLUMN_SIZE has been corrected to 53, 24,
and 53 respectively, which represents base 2 precision.
- For TIME columns, SQLGetTypeInfo must return a COLUMN_SIZE equal to 9
+ s (the number of characters in the hh:mm:ss[.fff...] format, where s is
the seconds precision). For SQL Anywhere, s is 6. For TIME columns, SQLGetTypeInfo
reported the COLUMN_SIZE as 8. The COLUMN_SIZE has been corrected to 15.
Corresponding corrections have been made to SQLColumns and SQLProcedureColumns:
- SQLColAttribute(SQL_DESC_PRECISION) must return the precision in bits
when SQL_DESC_NUM_PREC_RADIX is 2. For REAL columns, 7 was reported. This
has been corrected to 24. For FLOAT and DOUBLE columns, 15 was reported.
This has been corrected to 53.
- SQLColAttribute(SQL_DESC_LENGTH) must return the PRECISION descriptor
field for all numeric types. For TIME, it must return 15 (9+6 fractional
seconds). For REAL columns, 7 was reported. This has been corrected to 24.
For FLOAT and DOUBLE columns, 15 was reported. This has been corrected to
53. For TIME columns, 8 was reported. This has been corrected to 15.
- SQLColAttribute(SQL_DESC_DISPLAY_SIZE) must return the maximum number
of characters required to display data from the column. For REAL, it is 14.
For FLOAT and DOUBLE, it is 24. For TIME, it is 9 + s (a time in the format
hh:mm:ss[.fff...], where s is the fractional seconds precision). For SQL
Anywhere s=6. For REAL columns, 13 was reported. This has been corrected
to 14 (for example, -3.40282347e+38). For FLOAT and DOUBLE columns, 22 was
reported. This has been corrected to 24 (for example, -1.7976931348623150e+308).
For TIME columns, 8 was reported. This has been corrected to 15 (for example,
23:59:59.999999).
These corrections also appear for corresponding metadata methods in the
SQL Anywhere JDBC driver.
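The corrected TIME and TIMESTAMP sizes follow directly from the character-count formulas above. A small sketch, assuming SQL Anywhere's seconds precision s = 6 (function names are illustrative):

```c
#include <string.h>

/* Sketch of the corrected sizing formulas described above:
   TIME column/display size = 9 + s  ("hh:mm:ss" plus ".ffffff")
   TIMESTAMP column size    = 20 + s ("yyyy-mm-dd hh:mm:ss" plus ".ffffff") */
static int time_column_size( int s )      { return 9 + s; }
static int timestamp_column_size( int s ) { return 20 + s; }
```

With s = 6 these yield 15 and 26, matching the corrected values reported by the driver, and 15 is exactly the length of the worst-case display string 23:59:59.999999.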
================(Build #2200 - Engineering Case #802980)================
When an ODBC application had called SQLColumns or SQLProcedureColumns, some
of the returned metadata was incorrect:
- COLUMN_SIZE for numeric types is the precision, or number of digits, that
can be represented. The COLUMN_SIZE reported for BIGINT, UNSIGNED BIGINT,
UNSIGNED INT, and UNSIGNED SMALLINT were incorrect. These have been corrected
from byte length to numeric precision. COLUMN_SIZE for INTEGER, TINYINT,
and SMALLINT is unchanged.
- DECIMAL_DIGITS for all exact numeric types other than SQL_DECIMAL and
SQL_NUMERIC is 0. The DECIMAL_DIGITS for BIGINT, UNSIGNED BIGINT, UNSIGNED
INT, and UNSIGNED SMALLINT were NULL. This has been corrected to 0.
- DECIMAL_DIGITS for SQL_TYPE_TIME and SQL_TYPE_TIMESTAMP is the number
of digits in the fractional seconds component. The DECIMAL_DIGITS for TIME and
TIMESTAMP WITH TIME ZONE were NULL. This has been corrected to 6.
- CHAR_OCTET_LENGTH is the maximum length in bytes of a character or binary
data type column. The CHAR_OCTET_LENGTH for TIMESTAMP WITH TIME ZONE was
NULL. This has been corrected to 33.
================(Build #2171 - Engineering Case #802217)================
When an ODBC application calls SQLGetInfo with the SQL_IDENTIFIER_QUOTE_CHAR
option, the SQL Anywhere ODBC driver returns the single character SPACE as
a string (" ") when the database option quoted_identifier has been
set OFF.
If the database contains identifiers with spaces (for example, a table named
“My Appointments”), then the name must be quoted using double quotation marks
("), back ticks (`), or brackets ([]). However, when quoted_identifier
has been set OFF, then one of the latter two quoting mechanisms must be used
for “spacey” identifiers since "abc" is equivalent to 'abc' in
this mode. The following example shows an acceptable way to quote a spacey
table name, when quoted_identifier has been set OFF:
SELECT * FROM [My Appointments];
If you use an ODBC-based application that generates SQL (for example, Crystal
Reports), and quoted_identifier has been set OFF (perhaps inadvertently),
the generator might create an invalid SQL statement such as the following
since the “quote” character was reported to be a space character.
SELECT * FROM My Appointments;
This problem has been fixed. The ODBC driver will now return the back tick
character as a string ("`") for version 12 or later databases when
quoted_identifier has been set OFF. This means that the SQL generator might
build the following query, provided it uses SQLGetInfo( SQL_IDENTIFIER_QUOTE_CHAR
) to obtain the quoting character.
SELECT * FROM `My Appointments`;
Also, when SQL_ATTR_METADATA_ID has been set TRUE, catalog functions now
accept the quoting of identifiers as parameters using back ticks. Catalog
functions include SQLTables(), SQLColumns(), SQLTablePrivileges(), and so
on. Previously, only double quotes and brackets were supported.
================(Build #2154 - Engineering Case #801636)================
When an ODBC application using the SQL Anywhere ODBC driver was running in
ODBC 2.0 mode without the use of the Microsoft ODBC Driver Manager, some
SQLSTATE values did not match Microsoft SQLSTATE values. For example, if
the ODBC application dynamically loaded the ODBC driver, or the ODBC application
ran on a Unix platform, this difference may have been observed.
When an HY017 error was diagnosed by the SQL Anywhere ODBC driver, the corresponding
message text returned by the SQLError/SQLGetDiagRec functions was the empty
string. The message text should have been "Invalid use of an automatically-allocated
descriptor handle".
These problems have now been fixed.
================(Build #2089 - Engineering Case #799853)================
In build 2000 of the 17.0 SQL Anywhere ODBC driver, the ODBC 2.0 version
of the SQLColumns, SQLSpecialColumns, and SQLProcedureColumns functions returned
a result set with the column name “scale”. This column name was originally
in uppercase letters as “SCALE”. This has been fixed so that the column name
is now restored to “SCALE”.
In ODBC 3.x mode, this column is correctly named “DECIMAL_DIGITS”.
A work-around until the corrected version can be installed is to use a 17.0
ODBC driver earlier than build 2000.
================(Build #2088 - Engineering Case #799815)================
The 17.0 SQL Anywhere database server has made PROCEDURE_OWNER a reserved
word. For 17.0.0, the ODBC driver was changed to adapt to this new feature.
As of build 2000, the query used by the driver for the ODBC 2.0 version of
SQLProcedures did not quote the column name PROCEDURE_OWNER resulting in
a syntax error. Any ODBC application that uses SQLProcedures and runs in
ODBC 2.0 mode would fail.
For example, this problem will cause ADO applications that use MSDASQL (the
ODBC driver interface) which operates in ODBC 2.0 mode to fail. The following
sample VBScript code illustrates the problem:
command.CommandText = "sp_proc"
command.CommandType = adCmdStoredProc
command.Prepared = True
WScript.Echo "Parameters Count: " & command.Parameters.Count
If the stored procedure sp_proc has 3 parameters, the Parameters.Count value
returned was 0, not 3 as it should have been. ADO did not indicate that an
error had occurred.
This problem has been fixed.
A work-around until the corrected version can be installed is to use a 17.0
ODBC driver prior to build 2000.
================(Build #2087 - Engineering Case #799787)================
The 17.0 SQL Anywhere database server now supports server-side autocommit.
In general, the use of server-side autocommit improves the performance of
applications. However, there are some 3rd-party frameworks, like Hibernate,
that wrap SQL statement execution in (using JDBC as an example) setAutoCommit
calls. This is equivalent to the following sample JDBC code sequence.
while( iterations-- > 0 )
{
conn.setAutoCommit( true );
stmt.execute( sql_statement );
conn.setAutoCommit( false );
}
When connected to a 17.0 database server, such a construct results in suboptimal
performance because each call to the JDBC setAutoCommit method sends a “SET
TEMPORARY OPTION auto_commit=’ON’” (or ‘OFF’) statement to the database server for execution.
This problem has been fixed. A new connection parameter, ClientAutocommit=yes,
can be used to cause the client JDBC- or ODBC-based application to revert
to client-side autocommit behavior. Setting ClientAutocommit=no corresponds
to the default behavior. Note that the ClientAutocommit connection parameter
can be used with version 17.0, 16.0, or 12.0.1 ODBC drivers but it has no
effect if the database server does not support server-side commits (e.g.,
16.0 or 12.0.1 servers).
Of course, a work-around for better performance would be to move the setAutoCommit
calls outside the loop. But in some 3rd-party frameworks, this might not be
possible.
conn.setAutoCommit( true );
while( iterations-- > 0 )
{
stmt.execute( sql_statement );
}
conn.setAutoCommit( false );
On Windows, the Advanced tab of the ODBC Configuration for SQL Anywhere
dialog (using the ODBC Data Source Administrator) has been updated to include
this new connection parameter.
================(Build #2087 - Engineering Case #799779)================
The default AUTOCOMMIT behavior for the SQL Anywhere ODBC driver is SQL_AUTOCOMMIT_ON.
Changing the AUTOCOMMIT setting to SQL_AUTOCOMMIT_OFF before connecting to
the data source would have caused the driver to override this setting when
connecting to a database server that supports server-side autocommit (as
version 17 servers do).
The following is a sample ODBC code sequence where this problem occurs.
rc = SQLSetConnectAttr( hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF,
0 );
rc = SQLDriverConnect( hdbc, (SQLHWND)NULL, ds, SQL_NTS, scso, sizeof(scso)-1,
&cbso, SQL_DRIVER_NOPROMPT );
This problem has now been fixed. A work-around is to interchange the order
of the SQLSetConnectAttr and the SQLDriverConnect calls.
================(Build #2068 - Engineering Case #798010)================
When using the SQL Anywhere ODBC driver, a SQLNativeSql call would have returned
an error if the output buffer pointer (OutStatementText) was NULL, or if
the buffer length (BufferLength) was long enough to result in a 16-bit arithmetic
overflow when calculating the buffer size required for conversion of wide
character strings to multi-byte character sets including UTF-8. These problems
have been fixed.
================(Build #1451 - Engineering Case #796700)================
If a column that is longer than the SQL_ATTR_MAX_LENGTH value (default 256K)
was bound as SQL_C_BINARY and a multi-row fetch was performed, then the ODBC
driver would have crashed.
For example, if the column in the following query was bound as SQL_C_BINARY
and the row array size was 4, then the ODBC driver would have crashed when
attempting to fetch the rowset, provided that the SQL_ATTR_MAX_LENGTH value
was less than 300,000.
select cast(repeat( '0123456789', 30000 ) as long varchar) from sa_rowgenerator(1,4)
This problem has been fixed.
Note, this problem also affects the Interactive SQL utility (dbisql) when
fetching BINARY columns.
================(Build #1434 - Engineering Case #796090)================
Using the SQL Anywhere ODBC driver, calling SQLGetTypeInfo() would have returned
the following information in the result set when connected to an SAP IQ database
server:
TYPE_NAME=table DATA_TYPE=SQL_VARCHAR COLUMN_SIZE=32767 LP= LS= CREATE_PARAMS=
NULLABLE=1 TYPE_ORDINAL=1
The "table" type is not a suitable SQL_VARCHAR data type declarative
and is not equivalent to the "char" data type. This row should
not appear in the result set.
Using the SQL Anywhere JDBC driver, the DatabaseMetaData.getTypeInfo() call
will also include "table" in the result set when connected to an
SAP IQ database server.
These problems have been fixed.
================(Build #1422 - Engineering Case #795701)================
If the high-order byte in the val field of a SQL_NUMERIC_STRUCT was non-zero,
then the SQL Anywhere ODBC driver may not have converted the numeric value
correctly before sending it to the database server. The column value must
be bound as a SQL_NUMERIC type and be sufficiently large in order
for this to have occurred. For example, the representation of 31415926535897932384626433832795028.8419
in a SQL_NUMERIC_STRUCT is such that the high-order byte of the val field
is 0xec. An incorrect value would have been stored in the table column.
This problem has now been fixed.
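For background, the val field of ODBC's SQL_NUMERIC_STRUCT is a 16-byte little-endian integer, scaled by 10 to the power of the scale field; values large enough to reach the high-order bytes are where the bug appeared. A minimal decoding sketch (illustrative only; it decodes just the low 8 bytes, and the helper name is hypothetical):

```c
/* Sketch: SQL_NUMERIC_STRUCT stores its magnitude in a 16-byte
   little-endian val array. This helper decodes the low 8 bytes into
   an unsigned long long, for illustration. */
static unsigned long long numeric_val_to_ull( const unsigned char val[16] )
{
    unsigned long long v = 0;
    for( int i = 7; i >= 0; i-- ) {
        v = ( v << 8 ) | val[i];
    }
    return v;
}

/* 12345 (0x3039) in little-endian byte order; remaining bytes are zero. */
static const unsigned char sample_val[16] = { 0x39, 0x30 };
```

With scale = 2, the decoded 12345 would represent the numeric value 123.45.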
================(Build #1304 - Engineering Case #791481)================
When using the SQL Anywhere ODBC driver, the character size, display size,
and octet length information returned by the ODBC functions SQLDescribeCol
and SQLColAttribute were wrong for CHAR(x CHAR) or VARCHAR(x CHAR) columns
when connected to a multi-byte character set (MBCS) database using the “wide”
interface API (UNICODE mode).
Given a table with the following columns.
c_nchar nchar(42),
c_charchar char(42 char),
c_char char(126)
The c_charchar column will hold at most 42 national characters. For example,
a 932JPN database column holds 42 Japanese double-byte characters which requires
at most 84 bytes of memory to store. A UTF-8 database column holds 42 Japanese
double-byte characters which requires at most 168 bytes of memory to store
(4*42=168 is the worst-case scenario for UTF-8 surrogate code points).
For the c_charchar column, character size and display size should be 42.
Character size is the number of characters, not the number of bytes.
For the c_charchar column, the octet length is the maximum number of bytes
required to store these characters in memory on the client (e.g., number
of characters * 2 for double-byte, number of characters * 4 for UTF-8).
For a DBCS database like 932JPN, the ODBC driver reported 84 for the character
size, 84 for the display size, and 84 for the octet length. The character
size and display size were incorrect. There was no problem when the ODBC
application was compiled for and run in ANSI mode (for example, when using
SQLDriverConnectA rather than SQLDriverConnectW).
This problem has now been fixed. For each of the columns noted above, the
following is now reported.
Column 1:
SQLDescribeCol: column name = c_nchar
SQLDescribeCol: data type = SQL_WCHAR
SQLDescribeCol: character size = 42
SQLColAttribute(SQL_DESC_DISPLAY_SIZE): character size = 42
SQLColAttribute(SQL_DESC_LENGTH): character size = 42
SQLColAttribute(SQL_DESC_OCTET_LENGTH): byte size = 168
Column 2:
SQLDescribeCol: column name = c_charchar
SQLDescribeCol: data type = SQL_CHAR
SQLDescribeCol: character size = 42
SQLColAttribute(SQL_DESC_DISPLAY_SIZE): character size = 42
SQLColAttribute(SQL_DESC_LENGTH): character size = 42
SQLColAttribute(SQL_DESC_OCTET_LENGTH): byte size = 84
Column 3:
SQLDescribeCol: column name = c_char
SQLDescribeCol: data type = SQL_CHAR
SQLDescribeCol: character size = 126
SQLColAttribute(SQL_DESC_DISPLAY_SIZE): character size = 126
SQLColAttribute(SQL_DESC_LENGTH): character size = 126
SQLColAttribute(SQL_DESC_OCTET_LENGTH): byte size = 126
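The corrected sizing rule above can be summarized as follows: for a CHAR(n CHAR) column, the character size and display size are n, while the client-side octet length is n multiplied by the maximum encoded width of one character. A sketch, assuming 2 bytes per character for a DBCS charset such as 932JPN and 4 bytes for UTF-8:

```c
/* Sketch of the corrected octet-length rule for CHAR(n CHAR) columns:
   octet length = number of characters * maximum bytes per character
   (2 for a double-byte charset, 4 for UTF-8 worst case). */
static int octet_length( int n_chars, int max_bytes_per_char )
{
    return n_chars * max_bytes_per_char;
}
```

This reproduces the values reported above for char(42 char): 84 bytes for a DBCS database and 168 bytes for UTF-8.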
================(Build #1287 - Engineering Case #790651)================
When using the version 12 or 16 ODBC driver, any query that began with the
prefix “insert” was incorrectly categorized as an INSERT statement.
Beginning with version 17, any query that began with the prefix “insert”,
“update”, “delete”, or “merge” was incorrectly categorized
as an INSERT, UPDATE, DELETE, or MERGE statement. This problem has been fixed.
Note that the comparison was case-insensitive (insert, Insert, INSERT, etc.
all match).
For example, if the query “updateInventory( 100 )” was executed, the ODBC driver
would have assumed this was an UPDATE statement.
================(Build #1238 - Engineering Case #787903)================
If the StartLine (START) connection parameter contained the string “-n” anywhere
in the text, it was interpreted as if the -n option had been specified. This could
have affected the final server name that was chosen.
For example:
dbisql -c "UID=DBA;PWD=sql;START=dbeng16.exe -z -o c:\y-n\output.log;Server=SRV1;
DBN=DBN1;DBF=demo.db"
This problem has been corrected.
================(Build #1232 - Engineering Case #788053)================
If a User Data Source Name (DSN) was created with the same name as a System
Data Source Name, the original System Data Source could not be examined
or modified using the ODBC Configuration for SQL Anywhere window of the Windows
ODBC Data Source Administrator.
Furthermore, an attempt to modify the System DSN would have always resulted
in a modified version of the User DSN being written over the System DSN.
This problem has been fixed.
As a work-around, the dbdsn/iqdsn tool can be used to create/modify user
and system data sources.
================(Build #4877 - Engineering Case #816833)================
The OData Server has been upgraded to use Jetty 9.4.12.
================(Build #4860 - Engineering Case #816289)================
In some circumstances, the OData Producer would record a NullException while
closing a connection. This would result in an internal server error being
reported to the client, even though the operation completed. The exception
was introduced by CR 813769 (17.0.9.4786, 17.0.8.4154, 16.0.0.2654) when
very low values are used for the ConnectionAuthExpiry option.
This has been fixed.
================(Build #4829 - Engineering Case #815560)================
Client applications running in a browser would not be able to make OData
requests to a different origin, even when deemed safe (GET with limited headers).
For example, a SQL Anywhere database operating as a web service at https://mydomain.com:8123
could not serve a file with script that accessed the OData Producer running
on the same SQL Anywhere engine at https://mydomain.com:8888.
This has been fixed. The OData Producer can now be configured to return
the correct headers with Cross-origin Resource Sharing (CORS) requests.
The producer will respond to Origin and Access-Control-Request-Method HTTP
request headers when the AccessControlAllowOrigins configuration parameter
is specified. The AccessControlAllowMethods configuration parameter can
be used to limit which HTTP methods are allowed for CORS requests.
CORS is a client-side standard. The producer simply responds with the appropriate
HTTP response headers and the client decides whether to block requests.
The new configuration parameters are specified in the USING clause of CREATE
ODATA PRODUCER.
AccessControlAllowOrigins = values|*|None - Specifies allowed origins for
Cross-Origin Resource Sharing
Value is a comma separated list of origins that are allowed to access the
resources. Value * means all origins. Value None or not-specified means
do not enable for this producer. Default is to not enable.
If an allowed origin contains one or more * characters (for example http://*.domain.com),
then "*" characters are converted to ".*", "."
characters are escaped to "\." and the resulting allowed origin
interpreted as a regular expression.
Allowed origins can therefore be more complex expressions such as https?://*.domain.[a-z]{3}
that matches http or https, multiple subdomains and any 3 letter top-level
domain (.com, .net, .org, etc.).
Note: https?://mydomain.com will not be treated as a pattern because it
contains no * characters. Use http://mydomain.com,https://mydomain.com instead.
AccessControlAllowMethods = values - Specifies allowed Methods
for Cross-Origin Resource Sharing
Value is a comma separated list of HTTP methods that are allowed to be used
when accessing the resources. Default value is GET,POST,HEAD. Allowable values
are GET,HEAD,POST,PUT,PATCH,MERGE,DELETE. AccessControlAllowMethods requires
AccessControlAllowOrigins to be specified and not None.
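How the two parameters interact when answering a CORS request can be modeled with a short Python sketch. This is an illustrative model only (the producer's internal logic is not published; literal origin matching is assumed, and the default method list mirrors the one documented above):

```python
def cors_response_headers(origin, method, allow_origins,
                          allow_methods=("GET", "POST", "HEAD")):
    """Illustrative model: allow_origins=None means CORS is not enabled
    for the producer; '*' allows any origin."""
    if allow_origins is None:
        return {}  # no CORS headers; the client blocks the response
    if "*" not in allow_origins and origin not in allow_origins:
        return {}
    if method not in allow_methods:
        return {}
    return {
        "Access-Control-Allow-Origin": "*" if "*" in allow_origins else origin,
        "Access-Control-Allow-Methods": ",".join(allow_methods),
    }

hdrs = cors_response_headers(
    "https://mydomain.com:8123", "GET", ["https://mydomain.com:8123"]
)
print(hdrs["Access-Control-Allow-Origin"])  # https://mydomain.com:8123
```

As the readme notes, the server side only emits headers; whether a disallowed request is actually blocked is the client's decision.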
================(Build #4084 - Engineering Case #811453)================
The encoding of the ServiceRoot producer configuration parameter was not properly
defined. As a result, some characters in a ServiceRoot had to be encoded while
others did not, and some punctuation characters simply did not work. This was
broken further in build 17.0.8.4067, when spaces that had previously been allowed
unencoded stopped working.
This change specifies that ServiceRoot must be a valid encoded relative
URI path component (for example, spaces escaped as '%20', '%' escaped as '%25',
and non US-ASCII characters encoded as UTF-8 and escaped). Service roots must
not include any of the characters :?[']#@ (either encoded or unencoded),
must not include the encoded form of the / character, and should not include
. and .. path segments.
For backwards compatibility, unencoded spaces are still allowed but deprecated;
use %20 instead.
This has been fixed.
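The encoding rules can be illustrated with Python's standard urllib. This is a sketch only; encode_service_root is a hypothetical helper, not part of the product:

```python
from urllib.parse import quote

def encode_service_root(*segments):
    """Percent-encode each path segment; safe='' also escapes '/',
    so segment separators are added back explicitly between segments."""
    return "/".join(quote(seg, safe="") for seg in segments)

print(encode_service_root("my service"))  # my%20service
print(encode_service_root("100%", "v1"))  # 100%25/v1
print(encode_service_root("caf\u00e9"))   # caf%C3%A9 (UTF-8, then escaped)
```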
================(Build #4067 - Engineering Case #811130)================
The OData Server has been upgraded to use Jetty 9.4.7.
================(Build #4056 - Engineering Case #810942)================
If an OData Producer was restarted for a service using Repeatable Requests,
Repeatable Requests stopped working. An error appeared in the producer's
log indicating that the procedure odata_sys_repeatable_request_cleanup already
exists, and requests failed with the error that repeatable requests are
not enabled. A workaround is to have the OData admin user create a procedure
called odata_sys_request_cleanup and an event called odata_sys_request_cleanup
(that does nothing). This has been fixed.
================(Build #4053 - Engineering Case #810820)================
An OData service, under heavy load, may have produced many log messages concerning
java.lang.NullPointerException in a TreeSet used by the ConnectionPool. This
has been fixed.
================(Build #3473 - Engineering Case #809088)================
On Windows, when trying to determine the location of the Java VM, the database
server could search the wrong location for a Java VM and, if not successful,
the unhelpful message "OData startup failure with Unknown error"
would appear. This problem has been fixed. The database server will search
appropriate locations for the JRE that is installed with SQL Anywhere. A
work-around is to set the JAVAHOME or JAVA_HOME environment variable to point
to the JRE location.
================(Build #1451 - Engineering Case #796698)================
When CSRF tokens were enabled, modify requests that failed due to CSRF issues
(expired, invalid, no cookie or missing) would not have included the HTTP
response header "X-CSRF-Token: required". This has been fixed.
================(Build #1451 - Engineering Case #796644)================
Any update requests (bind) of a principal entity that modified a navigational
property (from the principal role) to a dependent entity would have ignored the
changes to that navigational property. Navigational properties modified
from the dependent role to a different principal entity were not ignored. This
has been fixed.
================(Build #1451 - Engineering Case #796643)================
Attempting to do an insert or update of an entity with a link where one of
the ends had multiplicity 0..1 could be rejected as a constraint violation.
This happened when the entity being linked to was already linked to by another
entity. The existing link must be removed to preserve the multiplicity.
This has been fixed. If the principal multiplicity is 0..1 or 1, the dependent
multiplicity is 0..1, and the dependent end is nullable, the OData Producer
will now remove the existing link.
================(Build #1428 - Engineering Case #795921)================
If a new user made many parallel requests and the metadata has not been built
for that user, the OData Producer will attempt to build the same metadata
in parallel but only keep one copy.
This has been fixed.
================(Build #1402 - Engineering Case #795072)================
Attempting to do a POST or PUT to modify a link using $links, where one of
the ends had multiplicity 0..1, could have been rejected with an invalid
cardinality error. This would have happened when the entity was already linked
to another entity and had to be detached in order to be attached to the new
one. This has been fixed. If the principal multiplicity is 0..1 or 1, the
dependent multiplicity is 0..1, and the dependent end is nullable, the OData
Producer will now remove the existing link.
================(Build #1386 - Engineering Case #794408)================
If a modification request was made to the OData Producer using the repeatable
requests feature, the response to the request could have been flushed back
to the client before the database connection was committed. If there was
an error during the commit this would have resulted in the client getting
incorrect data. This has been fixed.
================(Build #1341 - Engineering Case #792765)================
The OData Producer's processing of the GET request header "X-CSRF-Token:
FETCH" was not case insensitive. Although the documentation uses "FETCH",
the producer looked for "Fetch". This has been fixed; "FETCH" is now matched
case insensitively.
================(Build #1341 - Engineering Case #792761)================
A user's first request could have been very slow, and if there were many users
with different access permissions, users would have encountered occasional
slow requests.
On first request, the OData Producer must build the metadata for that user,
which it then caches. If there are many users with different permissions,
the cache may unload metadata for a particular user. In this case when that
user makes a subsequent request, their metadata must be rebuilt.
This has been fixed. The database query for retrieving the metadata has
been improved.
================(Build #1257 - Engineering Case #789270)================
The value of the Location HTTP header in responses to POST requests was not
properly encoded so that it could be used directly as a URL. This has now
been fixed.
================(Build #1196 - Engineering Case #786419)================
The OData Producer may have ignored a directive to accept a media type if
it had a quality score of 0. Example: "*/*;q=0". If no other suitable
media type was acceptable, the request would have failed with an UNACCEPTABLE
response. This has been fixed.
================(Build #1196 - Engineering Case #786369)================
The OData Producer would have ignored HTTP ACCEPT headers when formatting
error responses. This has been fixed. If a request accepts JSON responses
instead of XML, the error will now be returned in JSON.
================(Build #1196 - Engineering Case #783811)================
Service Operations whose underlying database stored procedures contained SQL
keywords as names of the result set columns would not have been usable with
the option ServiceOperationColumnNames=database. Requests for such service
operations would have resulted in HTTP 500 - Internal Server Error. This
has been fixed.
================(Build #1196 - Engineering Case #782946)================
The OData Producer would have generated generic 'HTTP 500 Internal Server
Error' errors when there were issues with the data source (the database)
being unavailable (for example the database server was not running). Administrators
would then have needed to look up diagnostic dump files to view the actual
error. This has been fixed. For common database connection errors due to
the server being unavailable, the producer now returns an HTTP 500 error
with appropriate error message (and does not generate a diagnostic file).
The HTTP status code for some connection errors has been changed to 'HTTP
500 Internal Server Error' instead of 'HTTP 400 Bad Request'.
================(Build #6206 - Engineering Case #822896)================
When using the SQL Anywhere OLE DB Provider, it was possible for a SQL syntax
error to occur in the ITransactionLocal::StartTransaction method. The diagnostic
message was:
Syntax error near 'SELECT' on line 1
This has been fixed.
================(Build #6206 - Engineering Case #822895)================
When using the SQL Anywhere OLE DB Provider, it was possible for a SQL syntax
error to occur in the ITransactionLocal::StartTransaction method. The diagnostic
message was:
Syntax error near 'SELECT' on line 1
This has been fixed.
================(Build #3473 - Engineering Case #808768)================
When using Microsoft SQL Server Integration Services (SSIS, DTSWizard) to
move a table from a Microsoft SQL Server database to SAP SQL Anywhere or
SAP IQ, the OLE DB provider failed to commit the rows inserted into the table.
This problem has been fixed. The OLE DB provider will commit any uncommitted
rows, provided that a ROLLBACK has not been performed.
================(Build #1432 - Engineering Case #795979)================
When using the SQL Anywhere OLE DB provider, attempting to move forward more
than one record using the Recordset.Move function would have failed if the
cursor type was a forward-only no-scroll cursor. This problem has been fixed.
================(Build #1362 - Engineering Case #793846)================
When using a SQL Anywhere OLE DB Linked Server object from Microsoft SQL
Server, a COMMIT or ROLLBACK of a distributed transaction would have failed.
For example, when attempting to update a row in the Contacts table of the
SQL Anywhere demonstration database using Microsoft SQL Server:
begin tran t2;
update SQLATest.demo.groupo.contacts set surname = surname + t.val
from (select 2 i, '???' val) t where id = t.i;
commit tran t2;
select surname from SQLATest.demo.groupo.contacts where id <= 4;
error messages were displayed, including one indicating that the OLE DB provider
"reported an error committing the current transaction". This problem
has now been fixed.
Also fixed are nested transactions using ADO and native SQL Anywhere OLE
DB. Microsoft SQL Server does not support nested distributed transactions.
Note, transactions using Linked Servers are always distributed transactions.
================(Build #6258 - Engineering Case #823994)================
When trying to insert NULLs into timestamp with time zone columns through parameters
with the SQL Anywhere Python driver, the engine complained with the error:
'Cannot convert integer to timestamp with time zone'. This has been fixed.
================(Build #6117 - Engineering Case #821408)================
The ICU library used by SQL Anywhere has been patched to address CVE-2020-10531.
================(Build #6102 - Engineering Case #821317)================
The libarchive library used by the SQL Anywhere UNIX installer has been upgraded
to version 3.4.2 to address reported open source vulnerabilities.
================(Build #6006 - Engineering Case #820751)================
Support was added to the SQL Anywhere C API for wide fetches.
The routines sqlany_fetch_absolute and sqlany_fetch_next can be used to
fetch batches of rows.
When successful, the return value from these routines is 1. When an error
occurs, the return value is 0.
If an error occurs in one of the rows of the result set (for example, a
conversion error or an underflow error), all preceding rows should be available,
but they were not.
This problem has been fixed. The following is a recommended method for determining
how many rows were successfully fetched before the error occurred.
When an error is diagnosed (return value 0), the sqlany_error routine can
be used to get the error code:
sacapi_i32 err_code = api.sqlany_error( sqlany_conn, err_mesg, sizeof(err_mesg) );
If the error code is not SQLE_NOTFOUND (100), then sqlany_fetched_rows can
be used to determine how many rows were actually fetched:
sacapi_i32 fetched = api.sqlany_fetched_rows( sqlany_stmt );
For wide fetches, a return value of 0 from sqlany_fetch_absolute and sqlany_fetch_next
should be interpreted to mean "some rows may be available".
================(Build #5993 - Engineering Case #820631)================
When trying to connect to a data source, the error "Connection error: Mismatched
braces near '???'" was returned. This message should display the portion of
the connection string where the syntax problem occurs, but it did not. This
problem has been fixed. Now a message like "Connection error: Mismatched
braces near 'LINKS=tcpip(host=localhost;port=2639'" is returned.
As a diagnostic aid, the LogFile (LOG) connection parameter can
be used and the contents of the log file can be examined after the unsuccessful
connection attempt.
================(Build #5929 - Engineering Case #820104)================
Support for wide fetches has been improved, including a fix for a software
crash. Support for bound columns has been improved, including fixes for a
software crash and a memory leak. Column DESCRIBEs are now deferred until
they are requested by sqlany_get_column_info().
================(Build #4917 - Engineering Case #817763)================
A new PHP driver with version number 2.0.18 now supports PHP 7.2 and 7.3.
================(Build #4798 - Engineering Case #814467)================
The SQL Anywhere C API sqlany_get_data(a_sqlany_stmt *, sacapi_u32, size_t,
void *, size_t) method could loop forever trying to fetch a blob column from
the server if the server returned an error during the fetch. This gave the
appearance that the client application or server was hung.
This problem also affects Perl, PHP, Python, Ruby, JavaScript and any other
application programming interface that uses the SQL Anywhere C API (dbcapi).
This problem has been fixed.
================(Build #4112 - Engineering Case #812453)================
The version of zlib used by the SQL Anywhere / IQ server for data compression
has been upgraded to 1.2.11.
================(Build #3386 - Engineering Case #806359)================
The Unix installer created a C-shell configuration script that incorrectly
included a Bourne shell-style test statement. When run, the script gave the
error: "Missing ]". This has been fixed.
================(Build #2174 - Engineering Case #802406)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.2i.
================(Build #2091 - Engineering Case #799885)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.2h. In addition, the version of the OpenSSL FIPS library has been
upgraded to 2.0.12.
================(Build #2091 - Engineering Case #799884)================
The version of OpenLDAP used by the SQL Anywhere server and client libraries
has been upgraded to 2.4.44.
================(Build #2000 - Engineering Case #799385)================
Non-threaded client application support is deprecated.
================(Build #1359 - Engineering Case #794768)================
When data was cryptographically hashed using SHA-1 or SHA-256, a small amount
of memory was leaked. This could have occurred when a connection was made
to the database server (hashing the password), when the HASH function was
used, or in a number of other possible cases in the server.
This also applied to the MobiLink server. The leak could have occurred on each
synchronization where MobiLink's built-in authentication was used, including
when an LDAP authentication failed over to built-in authentication.
This leak has now been fixed.
================(Build #1359 - Engineering Case #794337)================
Repeatedly calling the various new PKI routines to generate key pairs, verify
or sign messages, or encrypt or decrypt data using RSA, would have caused
the database and MobiLink servers to leak memory. This problem has been fixed.
================(Build #1320 - Engineering Case #792036)================
The SQL Anywhere Cockpit no longer includes connections made by the Cockpit
itself in counts of connections to the server and in lists of connections.
Cockpit connections are also excluded from calculations used to determine
when alerts are raised. Specifically, Cockpit connections are excluded in
the following places:
- The “total connections” tile which is shown on the Home and Connection
worksets and on the property sheet for several alert types.
- The list of connections on the connections workset. This list always
excluded Cockpit connections.
- The list of “CPU Intensive Connections” on the property sheet for the
“CPU Usage is high” alert.
- The count of connections on the Overview page of the Database property
sheet.
- The list of “Long running requests” on the property sheet for the “Long
running operation” alert.
- The “Temporary File Usage” by connection list on the property sheet for
the “High temporary file usage” alert.
- The “Connections with Locked Heaps” list on the property sheet for the
“Cache panic” alert.
- Calculation of alert conditions for the following alert types: “long
running operations”, “connection blocking” and “number of connections”
================(Build #1263 - Engineering Case #789607)================
In rare circumstances, the SQL Anywhere installer on Unix could have crashed
during an upgrade. This has been fixed.
A work around is to uninstall the old SQL Anywhere software and perform
a new installation of the new software.
================(Build #1243 - Engineering Case #773002)================
Generated 64-bit MSI installs had the BIN32 directory in the PATH environment
variable before the BIN64 directory. Also, the path contained an extra backslash
between the SQL Anywhere directory and the BIN32 or BIN64 directories. Both of
these problems have now been corrected.
================(Build #1064 - Engineering Case #786881)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.1p.
================(Build #1063 - Engineering Case #785926)================
On Mac OS X systems, failing to allocate memory could have caused the process
to crash. This applied to all processes within the SQL Anywhere product,
and has now been fixed.
================(Build #1063 - Engineering Case #785089)================
The library dbrsa16.dll was missing in the client install for SQL Anywhere
for Windows. The client install has now been modified to include this file.
================(Build #6307 - Engineering Case #824779)================
In some circumstances, the server could crash when performing an insert into
a proxy table that has a table check constraint. This has been fixed.
================(Build #6307 - Engineering Case #824752)================
If a mirror server has a large number of connections and many complex
queries and procedures to execute, it may perform poorly. This
has been fixed.
================(Build #6307 - Engineering Case #824739)================
There was no mechanism to determine the current setting for StatisticsCleaner,
DropBadStatistics, and DropUserStatistics using PROPERTY() or sa_eng_properties.
This has been fixed. These values can now be queried as engine properties.
================(Build #6304 - Engineering Case #824698)================
In very rare cases the server may crash if a complex query with constants
on the null-supplying side of an outer join is executed on proxy tables.
This has been fixed.
================(Build #6303 - Engineering Case #824209)================
In very rare cases the server returned assertion error 200610 or 102300
when running an ALTER TABLE statement with an ADD column NOT NULL DEFAULT
clause on a non-empty table. This has been fixed.
================(Build #6301 - Engineering Case #824660)================
The version of OpenSSL (FIPS) used by SQL Anywhere has been upgraded to 1.0.2y.
The version of OpenSSL (non-FIPS) used by SQL Anywhere and IQ products (cert
util only) has been upgraded to 1.1.1j.
================(Build #6296 - Engineering Case #824516)================
After a page read, the server may have incorrectly failed to increase some page
level statistic counters and to convert cache page locks from exclusive
mode to shared mode. In very rare cases, this led to a deadlock between
parallel worker threads of the same query execution. This has been fixed.
================(Build #6293 - Engineering Case #824583)================
When a web procedure whose URL requires a substituted parameter was called with
a value containing a blank, the URI built by the web procedure truncated the
parameter value, so the resource could not be found. For example,
CREATE OR REPLACE PROCEDURE getResource(
baseUrl LONG VARCHAR,
resourceName LONG VARCHAR
)
URL '!baseUrl/schemas/!resourceName'
TYPE 'HTTP:GET';
call getResource('http://server:port', 'Ca Na Da');
The HTTP request start line's request target was incorrectly built:
GET /schemas/Ca HTTP/1.0
This has been fixed.
================(Build #6291 - Engineering Case #824445)================
Under very rare circumstances, the server may deadlock when executing a
GRANT EXECUTE, CREATE PROCEDURE or ALTER PROCEDURE statement that is inside
another procedure, if that other procedure is concurrently executed by a different
connection. This has been fixed.
================(Build #6290 - Engineering Case #824585)================
Spatial set operations such as ST_Intersection could cause a crash in a very
rare scenario which depends upon the precise geometries involved. Also, very
particular geometries belonging to round-earth SRSs (for example, SRID 4326),
could cause a crash within GeoSinkTransfGnomonicSimple::VBuildPoint. This
has been fixed.
================(Build #6290 - Engineering Case #823743)================
In very rare cases the server may incorrectly return the assertion error
106104 "Field unexpected during compilation" for a query with
correlated subqueries. This has been fixed.
================(Build #6284 - Engineering Case #824392)================
For a locale in the southern hemisphere that observes Daylight Savings Time,
the tolocaltime function would return an incorrect local time for a given
date/time in UTC. This problem could result in local timestamp columns in
system views being incorrectly rendered from their UTC counterpart column
(xxx_utc). This has been fixed.
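For context, a plain Python illustration (not SQL Anywhere code) of why the southern hemisphere is the tricky case: DST is in effect around January and not around July, the reverse of northern locales:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

sydney = ZoneInfo("Australia/Sydney")

jan = datetime(2021, 1, 15, 12, 0, tzinfo=timezone.utc).astimezone(sydney)
jul = datetime(2021, 7, 15, 12, 0, tzinfo=timezone.utc).astimezone(sydney)

print(jan)  # 2021-01-15 23:00:00+11:00  (DST in the southern summer)
print(jul)  # 2021-07-15 22:00:00+10:00  (standard time in winter)
```

A conversion that assumes northern-hemisphere DST dates would apply the offsets the wrong way around for such a locale.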
================(Build #6281 - Engineering Case #824404)================
The OData Server has been upgraded to use Jetty 9.4.38.
================(Build #6278 - Engineering Case #824317)================
If a .NET function is called in the CLR external environment, an automatic
commit is performed. This makes it impossible to perform a ROLLBACK of uncommitted
transactions.
Here is an example.
INSERT INTO TEST VALUES( 'data that I do not want to commit' );
SELECT stc_get_value(); // an external CLR function
ROLLBACK; // has no effect, row has been committed
This has been fixed. Calls into a CLR function will not result in a COMMIT,
unless specifically done by the user's implementation of the function.
================(Build #6276 - Engineering Case #824295)================
The ODBC driver was unable to cancel a wide insert statement through SQLCancel
when an error occurred during the insert. This has been fixed.
================(Build #6273 - Engineering Case #824214)================
For query cursors opened with hold, schema locks on the tables in the
query are supposed to be held until the COMMIT/ROLLBACK that follows
the closing of the cursor. If a cached plan was used for a cursor that
was held open across a COMMIT or ROLLBACK, these schema locks were
incorrectly held until the cached plan was evicted from the connection
plan cache.
This has been fixed.
================(Build #6254 - Engineering Case #823925)================
When generating a temp table definition for SELECT ... INTO, NCHAR and NVARCHAR
columns could be incorrectly sized using the described size rather than the
declared size for the temporary table schema definition. This could also
result in the column type LONG NVARCHAR if the described length was more
than 32767. This has been fixed.
================(Build #6254 - Engineering Case #823924)================
The sa_reset_identity procedure's new_identity parameter used a default of NULL.
If new_identity was NULL, the procedure raised error -20002, invalid new_identity
value. Additionally, the procedure allowed the new_identity parameter to
be a negative value, but this would set the next identity value to NULL.
With this change, the default is now 0, and new_identity must not be NULL
or a negative value.
================(Build #6254 - Engineering Case #823647)================
In very rare cases the server may crash when using database variables in
a parallel query execution. This has been fixed.
================(Build #6248 - Engineering Case #823811)================
When a client established a TLS connection, depending on the exact encryption
connection parameters used, several KB of memory could have been leaked.
This has been fixed.
================(Build #6247 - Engineering Case #823777)================
The Common Crypto Library has been updated to the latest version as part
of normal maintenance.
================(Build #6239 - Engineering Case #823611)================
When establishing a TLS connection, if the server did not respond at all,
the client would have hung. This has been fixed so that the client will now
time out after about 30 seconds for direct TLS connections ("...;ENC=TLS(...;DIRECT=YES)")
or after the liveness timeout otherwise.
================(Build #6232 - Engineering Case #823524)================
If a call was made to a stored procedure defined as a web client procedure
and the web server returned a 1xx (Informational), 204 (No Content), or 304
(Not Modified) response, it would have been possible for the request to
time out and then be disconnected from the web server even if the Connection:
Keep-Alive header was on the request. This has now been fixed.
================(Build #6229 - Engineering Case #822950)================
Uncommitted operations on local temporary tables that are not declared as
NOT TRANSACTIONAL blocked image backups with the options WAIT BEFORE START
and WAIT AFTER END. This has been fixed.
================(Build #6211 - Engineering Case #823017)================
On older versions of Linux with a large amount of memory, running xp_cmdshell
could be slow. The duration would be proportional to the amount of memory
being used by the system. This has been fixed.
An ISO 8601 basic date string of the form yyyymm (6 consecutive
digits) is misinterpreted as a date in the form yymmdd. The yymmdd date interpretation
has been supported by the database server since long before ISO 8601 support was
added. Thus there is a conflict between the legacy interpretation and the
ISO 8601 interpretation.
For example, the SQL statement "select cast( '200503' as date )"
returns the date 2020-05-03.
To resolve this conflict, a work-around has been implemented such that a
date of the form yyyymmT is now interpreted solely as ISO 8601. This interpretation
is used since the T indicator is clearly part of the ISO 8601 standard. In
this ISO 8601 format, the day value defaults to 01.
For example, the SQL statement "select cast( '200503T' as date )"
now returns the date 2005-03-01.
Also, if the yymmdd interpretation results in a month value that is 00 or
greater than 12, then it is assumed that the date must be in ISO 8601 format
(e.g., 201302 represents ISO 2013/02/01 and not 2020/13/02, which is impossible).
Also note that a date string in the form yyyymmdd is not subject to misinterpretation
under any circumstances.
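The disambiguation rules above can be modeled in Python. This is a sketch under the assumption of a fixed 20xx century for the yy value; the server's actual century handling follows its own options:

```python
from datetime import date

def interpret_basic_date(s):
    """Model of the rules above for 6-digit date strings and yyyymmT."""
    if len(s) == 7 and s.endswith("T") and s[:6].isdigit():
        # yyyymmT is unambiguously ISO 8601; the day defaults to 01.
        return date(int(s[0:4]), int(s[4:6]), 1)
    if len(s) == 6 and s.isdigit():
        yy, mm, dd = int(s[0:2]), int(s[2:4]), int(s[4:6])
        if 1 <= mm <= 12:
            return date(2000 + yy, mm, dd)  # legacy yymmdd interpretation wins
        # Impossible month under yymmdd: fall back to ISO yyyymm, day 01.
        return date(int(s[0:4]), int(s[4:6]), 1)
    raise ValueError("unsupported format: " + s)

print(interpret_basic_date("200503"))   # 2020-05-03 (legacy yymmdd)
print(interpret_basic_date("200503T"))  # 2005-03-01 (ISO, day defaults to 01)
print(interpret_basic_date("201302"))   # 2013-02-01 (month 13 is impossible)
```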
================(Build #6206 - Engineering Case #822822)================
For some string concatenation operations using || and involving NATIONAL
CHARACTER types (NCHAR, NVARCHAR, LONG NVARCHAR, etc.), the wrong string
length could be returned. In the following example, the string returned is
"aabccc" but the length returned is 3 when it should be 6.
begin
declare initStr nchar(1) = 'a';
declare sourceStr long nvarchar = repeat( initStr, 2 ) || 'b';
set sourceStr = sourceStr || 'ccc';
select sourceStr, len(sourceStr);
end;
A work-around is to use the + operator instead of the || operator. This
has been fixed.
================(Build #6203 - Engineering Case #822627)================
In very rare cases the server may crash when it tries to use a hash filter
with keys that are cast expressions on an outer reference. This has been
fixed.
================(Build #6196 - Engineering Case #822456)================
The server incorrectly evaluated the arguments of an UNNEST array operator
multiple times, which could result in slower performance or a server crash.
This has been fixed.
================(Build #6194 - Engineering Case #822623)================
In very rare cases, when using TDS connections with Kerberos authentication,
the server may crash. This has been fixed.
================(Build #6194 - Engineering Case #822574)================
If a binary string for an IDENTIFIED BY ENCRYPTED password contains a NULL
byte, the string is truncated prematurely at the NULL byte by the database
server.
Example:
ALTER SERVER "server_b" DEFAULT LOGIN 'user_a' IDENTIFIED BY ENCRYPTED
'\xad\x8f\x7a\x00\xe1\x3d\xb4\x1f\x93\xbc'
The database server behaves as if the following had been specified:
ALTER SERVER "server_b" DEFAULT LOGIN 'user_a' IDENTIFIED BY ENCRYPTED
'\xad\x8f\x7a'
This has been fixed.
================(Build #6193 - Engineering Case #822491)================
If the server is started with option -im v (in-memory mode for database validation)
with a limited cache size, it can run out of cache space if there are
a lot of database pages to clean. This has been fixed. In mode -im v, the
server will no longer run the Cleaner task.
================(Build #6192 - Engineering Case #822321)================
In very rare cases the server may crash if the procedure sa_certificate_info
is called for a bad certificate. This has been fixed.
================(Build #6191 - Engineering Case #822614)================
If there had been a massive amount of contention for pages in the database
cache and pages in the temporary dbspace, it was possible for the database
engine to have become unresponsive for a short period of time. This has
now been fixed.
================(Build #6185 - Engineering Case #822287)================
If a connection runs an sa_table_fragmentation procedure call on a very large
table and the server wants to execute a checkpoint, or other connections want
to run DDL-type statements, the server may become unresponsive until the
sa_table_fragmentation call finishes. This has been fixed.
================(Build #6180 - Engineering Case #822162)================
The server may crash if an xp_sendmail function call gets email addresses
with invalid characters. This has been fixed.
================(Build #6175 - Engineering Case #821576)================
The server may incorrectly return the SQL code -904 "Illegal ORDER BY
in aggregate function" if a query contains two or more LIST functions
in the same query block, the LIST functions have the same or subsumed ORDER
BY clauses, and an ORDER BY expression is used in an IF expression in the
first parameter of the LIST function.
For example, in the following query the expression "coalesce(a1,0)"
appears in both LIST functions, as part of the IF expressions and the ORDER BY:
select
list( if coalesce(a1,0) < 1 then b1 else b1*(c1/a1) endif, ',' order by coalesce(a1,0) desc ) as col1,
list( if coalesce(a1,0) <= 1 then b1 else b1*(c1/a1) endif, ',' order by coalesce(a1,0) desc ) as col2
from T1
This has been fixed.
================(Build #6168 - Engineering Case #821603)================
Under rare conditions, the server returned assertion error 100904 with "Error:
Deadlock detected" if the server was running as a mirror server and there was
a deadlock between read-only connections and the connection that applies
the recovery log from the primary side. The problem only happened if the
connection applying the recovery log was picked to be rolled back. This has
been fixed.
================(Build #6158 - Engineering Case #821883)================
Under some circumstances, executing a SELECT statement that contains a START
AT clause can fail with the error "attempting to unlock a relocatable
heap that contains a registered yield action". This has been fixed.
================(Build #6157 - Engineering Case #821606)================
The database server does not generate a core file or minidump on disk-full
conditions, but it did generate one on operating system disk I/O errors.
This behaviour has been disabled. The disk I/O error message and the native
operating system error code can be seen in the console log file.
================(Build #6149 - Engineering Case #821658)================
The version of OpenSSL (non-FIPS) used by SQL Anywhere and IQ/DT products
has been upgraded to 1.1.1g.
================(Build #6146 - Engineering Case #821307)================
In very rare cases the server may return assertion error 106200 "Unable
to undo index changes during rollback - Error: %s" when recovering an
update on a table with a primary key or unique indexes. This has been fixed.
================(Build #6145 - Engineering Case #821051)================
Under some circumstances, executing a SELECT query with a sort could cause
a server crash. This has been fixed.
================(Build #6128 - Engineering Case #821512)================
The Common Crypto Library has been updated to the latest version as part
of normal maintenance.
================(Build #6118 - Engineering Case #821436)================
When using SQLGetData in an ODBC application, the conversion of a DOUBLE
value to a string may result in a loss of precision even if the application
specified an output buffer of significant size to hold the entire value.
This has been fixed.
================(Build #6117 - Engineering Case #820629)================
In very rare cases the server may return the assertion error 201501, 201503
or 201135, or the error 'Orphaned blob found on page <page-number> of
table "<table-name>" in database file "<database-file>"',
during or after an UPDATE statement that changes a long string value. The
problem only happens if the update also changes columns that are part of
a unique index or primary key. This has been fixed.
================(Build #6114 - Engineering Case #821268)================
If applying log operations during parallel recovery failed, the server
printed the following messages to the console log:
Invalid transaction log (id=<operation-id>, page_no=<page-no>,
offset=<offset>): identity value not found
Assertion failed: 201501 (version)[database]
Page 0xf:0xfffffff for requested record not a table page
Unfortunately, the printed log position (page number and offset) was wrong,
and the subsequent assertion error did not point to the real cause of the
problem. This has been fixed.
================(Build #6102 - Engineering Case #820842)================
In very rare cases the server may crash or return assertion error 106502,
109523 or others if a statement uses database variables and contains
subqueries that have not been flattened. This has been fixed.
================(Build #6099 - Engineering Case #820005)================
If a Windows UNC (Universal Naming Convention) path was specified to the sp_create_directory
system procedure, the SQL Anywhere server failed to create the specified path.
It only supported paths using drive specifiers, or absolute/relative paths
other than UNC paths.
Example: select sp_create_directory('\\\\myarea.example.corp\\myfolder');
This has been fixed.
================(Build #6090 - Engineering Case #821242)================
When using the SQL Anywhere sp_list_directory system procedure, it is possible
that the database server might crash. The same problem exists in the SQL
Anywhere Directory Access Server feature. The "permissions" column
might be missing the "d" flag for directories (for example, -rwxrwxrwx
instead of drwxrwxrwx). These problems have been fixed.
================(Build #6075 - Engineering Case #819066)================
The changes for Engineering Case 818214 introduced a problem for date/time
strings of the format HH:MM:SS-YY/MM/DD. This format does not conform to
ISO 8601 but is in use by existing customer applications.
The original change added support for legal ISO 8601 date/time strings of
the format HH:MM:SS-HH which is time-of-day followed by a negative time zone
hours offset. However, in adding this support, existing usage was broken.
This problem has been fixed and both date/time string formats are now supported.
================(Build #6074 - Engineering Case #821178)================
The version of OpenSSL used by SQL Anywhere has been updated to 1.1.1d (non-FIPS)
and 1.0.2u (FIPS).
================(Build #6067 - Engineering Case #821106)================
SQL Anywhere server error or warning message text that contains substitutions
of user-defined identifiers or values will display, at most, the first 130
characters of the user-defined identifier or value. Before this change, it
was more likely that the insertion of a very long string into message text
(> 250 characters) would result in subsequent strings being displayed
in the message text as a series of question marks. This change alleviates
but does not entirely exclude the possibility of subsequent strings being
displayed in the message text as a series of question marks.
For example, if the message text is "Invalid value '%s1' for column
'%s2'", the message was truncated when the value and column name
were very long.
This has been fixed.
================(Build #6061 - Engineering Case #821227)================
When using the SQL Anywhere server on Linux and other non-Windows platforms,
the wrong value could be returned for the current time zone offset (Value)
in the following query.
select Value from sa_db_properties() where propname='CurrentTimeZoneOffset'
This problem has been fixed.
Since the time zone offset can change during switches to/from daylight savings
time or by virtual TIME ZONE changes (CREATE TIME ZONE is supported in SQL
Anywhere only), the database server now logs updates to the current time
zone offset in the database server console so that there is a record of the
event. The new console message has the following form: "UTC time zone
offset for "<database-name>" set to <integer-value>
minutes".
Example for North American Eastern Standard time zone: UTC time zone offset
for "demo" set to -300 minutes
================(Build #6059 - Engineering Case #820962)================
In very rare circumstances, the server could crash during optimizer join
enumeration. This has been fixed.
================(Build #6045 - Engineering Case #820750)================
Starting with builds of SQL Anywhere 17.0.10 with build numbers 5746 and
later, the use of a host variable as the delimiter argument to a LIST aggregate
function would fail, and would generate a SQLCODE of -156. This has been
fixed.
================(Build #6043 - Engineering Case #820985)================
If a database involved in replication or synchronization was rebuilt using
the CREATE ENCRYPTED DATABASE or CREATE DECRYPTED DATABASE command, the ending
log offset of the transaction log for the old database would be 30 bytes
less than starting log offset of the transaction log for the new database.
This would result in an error the first time dbremote or dbmlsync was run
against the new database. This has now been fixed.
================(Build #6035 - Engineering Case #820906)================
The SQL Anywhere server may crash while processing variables. This has been fixed.
================(Build #6006 - Engineering Case #820748)================
Open source libraries have been updated to address possible security vulnerabilities:
Upgrade to libarchive 3.4.0 and bzip2 1.0.8
Upgrade to OpenLDAP 2.4.48
================(Build #6000 - Engineering Case #820265)================
In very rare situations the server may crash when running the procedure sa_stack_trace
on other connections. This has been fixed.
================(Build #6000 - Engineering Case #819931)================
In very rare cases, the server may return the assertion error 111105 "Sort
error - incorrect row count" for a query that performs a merged sort
operation. This has been fixed.
================(Build #5999 - Engineering Case #820690)================
Support for remote servers ( CREATE [REMOTE] SERVER ) has been improved.
This feature is associated with terms like "proxy tables", "remote
tables", "CIS", "OMNI", "dynamic tiering"
(DT), and "federation".
This feature uses ODBC drivers to connect to and transfer data to and from
other database management systems.
Some highlights of the problems that have been addressed follow.
In some cases, ODBC statement handles were leaked. This has been fixed.
In some cases, an incorrect statement handle might have been passed to the
ODBC driver resulting in a crash. This has been fixed.
In some cases, a result set retrieved from a remote server may have been
incomplete. This has been fixed.
Remote procedure result sets containing a LONG NVARCHAR column could result
in an assertion. This has been fixed.
Remote procedure parameters that are of type UNSIGNED BIGINT could result
in a "value out of range for destination" error. This has been
fixed.
================(Build #5994 - Engineering Case #820591)================
In rare cases, analyzing query performance data could result in an engine
crash. This has been fixed.
================(Build #5992 - Engineering Case #820101)================
The functions NEXTVAL and CURRVAL for SQL sequences could return an incorrectly
large value when they were expected to return a negative sequence value. The
functions were also incorrectly described as UNSIGNED BIGINT instead of signed
BIGINT. This has been fixed.
================(Build #5991 - Engineering Case #820594)================
If the DB server was started and immediately stopped, it was possible in
rare cases for the server to crash. This has been fixed.
================(Build #5988 - Engineering Case #820664)================
Error messages may have incorrect or missing parameter substitutions. This
has been fixed.
================(Build #5988 - Engineering Case #820289)================
The server may crash or return the assertion errors 109523 or 109507 when
executing MUTEX or SEMAPHORE statements with indirect identifiers or indirect
owners concurrently in user events or procedures. This has been fixed.
================(Build #5986 - Engineering Case #819907)================
In very rare circumstances, the server could crash if a parallel query execution
with a parallel hash join is cancelled or fails with an SQL error. This has
been fixed.
================(Build #5972 - Engineering Case #820234)================
The server may return the assertion error 115101 "unexpected special
domain type" or 201501 "Page 0xf:0xfffffff for requested record not a table page"
if the server runs, alters or drops a stored procedure with a parameter of
type ROW, ARRAY or TABLE REF.
The problem only happens if the procedure was created with a version 17.0.10
server but runs on a server of version 17.0.9 or lower, or vice versa.
This has been fixed.
In existing databases you can fix the problem by rebuilding the database.
================(Build #5960 - Engineering Case #820310)================
In some cases the server might leak memory when performing table level operations.
This has been fixed.
================(Build #5955 - Engineering Case #820240)================
HTTP web service connections could stop being accepted. This has been fixed.
================(Build #5924 - Engineering Case #819814)================
In some cases, the server may crash when closing a cursor with a query plan
that had a GrByO, DistO, or JM operator underneath an Exchange operator.
This has been fixed. To work around this problem you can turn off parallel
query execution by setting the option Max_query_tasks = 1.
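The workaround above can be applied with a SET OPTION statement; for example
(a sketch only, assuming the PUBLIC setting is appropriate for your deployment):

```sql
-- Disable intra-query parallelism for all connections as a temporary
-- workaround; remove once a fixed build is applied.
SET OPTION PUBLIC.max_query_tasks = 1;
```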
================(Build #5924 - Engineering Case #819797)================
In very rare circumstances, the server could crash while running a parallel
query execution. This has been fixed.
================(Build #5924 - Engineering Case #819772)================
In very rare cases, the server may crash if a parallel query execution with
a hash join is cancelled or fails with the SQL error SQLSTATE_TEMP_SPACE_LIMIT.
This has been fixed.
================(Build #5921 - Engineering Case #820001)================
The system procedure sa_materialized_view_can_be_immediate incorrectly reported
that a materialized view cannot be immediate if that view has an outer join with
an index declared WITH NULLS NOT DISTINCT. This issue has now been fixed.
================(Build #5897 - Engineering Case #819927)================
In some cases with parallel recovery enabled, recovery might fail with
assertion 100904 and the error "Deadlock detected". Restarting the recovery
process eventually allowed the recovery to complete. This has been fixed.
================(Build #5897 - Engineering Case #819896)================
The server may block other requests while parsing an INSERT statement with
a large number of value lists. This has been fixed.
================(Build #5881 - Engineering Case #819749)================
The BACKUP DATABASE command for a database with table encryption could create
a backup database file that shows the assertion error 101412. The problem
happened if there were concurrent write transactions while the BACKUP DATABASE
command ran. Databases with database encryption are not affected. This has
been fixed.
================(Build #5874 - Engineering Case #819689)================
In some cases, some Web service functions may cause the server to crash.
This has been fixed.
================(Build #5874 - Engineering Case #819665)================
Under some conditions, the server may abort after printing a message like
the following to the console, or print another message line with incorrect
information:
Task 0x245bda0(Request task 52) is trying get forbid mutex held by task
0x2460ea0(Request task 79) for more than 60000 ms
This has been fixed.
================(Build #5867 - Engineering Case #819681)================
In some restricted cases, the predicates in a query's WHERE or HAVING clause
could be optimized such that the query's result is no longer equivalent to
the original statement. For this to occur, the WHERE or HAVING search condition
must contain the following:
- the search condition must include a top-level OR (that is, the search
condition is in disjunctive normal form (DNF)).
- the condition must also include a nested OR predicate containing a series
of equality conditions involving constants that are known at compile time,
and that can be replaced by an IN-list predicate.
- one of the top-level disjunctive clauses is a contradiction, for example
WHERE NULL IS NOT NULL.
- the contradictory condition involves NULL, a scalar or aggregate function,
or a host variable.
This has been fixed.
================(Build #5855 - Engineering Case #818631)================
If a connection with the auto-commit flag enabled calls a stored procedure to
insert rows into a table, and a second connection running at isolation level 1
queries the same table, the second connection will be blocked until
the first connection commits or rolls back. This has been fixed.
================(Build #5832 - Engineering Case #819238)================
If you had tried to define a materialized view as immediate refresh, it was
possible that the engine could have asserted if the materialized view had
included certain built-in functions. The assertion message would have been
“Assertion failed: 102909 - Could not generate trigger for immediately maintained
materialized view. Transaction rolled back.” This has now been fixed.
================(Build #5818 - Engineering Case #818520)================
In some restricted cases, the predicates in a query's WHERE or HAVING clause
could be optimized such that the query's result is no longer equivalent to
the original statement. For this to occur, the WHERE or HAVING search condition
must contain the following:
- the search condition must include a top-level OR (that is, the search
condition is in disjunctive normal form (DNF)).
- the condition must also include a nested OR predicate containing a series
of equality conditions involving constants that are known at compile time,
and that can be replaced by an IN-list predicate.
- one of the top-level disjunctive clauses is a contradiction, for example
WHERE NULL IS NOT NULL.
- the contradictory condition involves NULL, a scalar or aggregate function,
or a host variable.
This has been fixed.
================(Build #5807 - Engineering Case #819171)================
In very rare circumstances, the server could crash if a statement contains
an ORDER BY clause with an unquantified non-deterministic order expression.
This has been fixed.
================(Build #5793 - Engineering Case #819206)================
If an invalid server ID file is used to start TLS or HTTPS, the server would
successfully start but the first attempted connection would cause the server
to crash. This has been fixed.
================(Build #5788 - Engineering Case #819146)================
The Offline Password Reset feature did not work if the database was encrypted.
The database server option for Offline Reset Password, "-orp", did
not allow the user to specify the "-ek" database option.
This has been fixed. The Offline Reset Password feature now supports
encrypted databases.
================(Build #5777 - Engineering Case #818256)================
Under rare circumstances, a query having a subquery with outer references
could be optimized incorrectly. This may result in incorrect result sets
returned. This has been fixed.
================(Build #5758 - Engineering Case #820229)================
Under rare circumstances, the database server can become unresponsive while
handling a deadlock. This has been fixed.
================(Build #5758 - Engineering Case #818998)================
Under rare circumstances, the database server can become unresponsive while
handling a deadlock. This has been fixed.
================(Build #5757 - Engineering Case #818800)================
Lock tables are an internal structure used by SQL Anywhere server to record
locks on rows. At startup time, the SQL Anywhere server selects how many
lock tables to create based on how large the server appears to be. In general,
more lock tables will allow more simultaneous transactions, at a cost of
slightly higher memory usage.
On very large systems with hundreds of concurrent transactions, the selected
number of lock tables may be too low. This change allows users to see how
many lock tables were selected, to see if there are too few lock tables for
a given workload, and to override the automatically selected number of lock
tables if necessary.
New server command-line switch:
-lt <n>
Causes each database to have n lock tables. The value will be rounded up
to the nearest power of two, to a maximum of 128.
New database properties:
LockTableContentionCount
Shows the number of times that significant contention has occurred on any
of the lock tables. A value larger than the number of hours the database
has been running indicates a need to increase the number of lock tables
using the -lt switch.
NumLockTables
The number of internal lock tables used by the database. These are internal
structures that are not visible to users. The server normally selects an
appropriate number of lock tables.
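The new properties can be inspected with the DB_PROPERTY function; a quick
check might look like the following sketch (property names as listed above):

```sql
-- Check how many lock tables were selected, and whether observed
-- contention suggests raising the count with the -lt server switch.
SELECT DB_PROPERTY( 'NumLockTables' )            AS num_lock_tables,
       DB_PROPERTY( 'LockTableContentionCount' ) AS contention_count;
```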
================(Build #5746 - Engineering Case #819636)================
A query containing the LIST function with a delimiter containing a valid
host variable could have caused the server to return the SQL code -156 error
when the server tried to create a cacheable plan. This has been fixed.
This change enforces a documented restriction for the LIST() aggregate function,
which is that the delimiter expression should be a constant, or equivalent-to-constant.
The allowed expressions are now (1) a literal constant, (2) a host variable,
(3) a procedure variable, or (4) a row variable in a row trigger.
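For illustration, two of the allowed delimiter forms are sketched below
(the table Employees and column Surname are hypothetical):

```sql
-- (1) literal constant as the LIST delimiter
SELECT LIST( Surname, '; ' ) FROM Employees;

-- (3) procedure variable as the LIST delimiter
BEGIN
    DECLARE delim VARCHAR(4);
    SET delim = ' | ';
    SELECT LIST( Surname, delim ) FROM Employees;
END;
```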
================(Build #5746 - Engineering Case #818847)================
In some cases, starting the 32-bit SQLA server on Windows platforms might
fail with an “Unable to initialize AddressSpaceManager” error message. This
has been fixed.
================(Build #5746 - Engineering Case #818597)================
In some rare situations, the query optimizer would convert a complex search
condition into an equivalent expression which would be impossible to optimize,
particularly if the query contained any inner or outer joins. This has been
fixed.
================(Build #5466 - Engineering Case #815845)================
If a database had been strongly encrypted, the SYNCHRONIZE command would
have failed because it could not decrypt the transaction logs. A call to
sp_get_last_synchronize_result() to view the results of the failed synchronization
would have shown the error “Incorrect database encryption key for database
'???'” in the results. This has been fixed by modifying the syntax of the
SYNCHRONIZE command to add the “KEY database-encryption-key” clause, allowing
users to pass the database encryption key to the SYNCHRONIZE command so that
the transaction logs can be decrypted. If the SYNCHRONIZE command is
executed on a strongly encrypted database without specifying the KEY option,
then the user will be prompted for the encryption key. The KEY clause of
the SYNCHRONIZE command is ignored if the database is not strongly encrypted.
In order to use the KEY clause on the SYNCHRONIZE command, the database must
have been initialized with version 17.0.10, or have been upgraded to version
17.0.10. A possible workaround to this issue would be to pre-start a dbmlsync
process in server mode using the -sm -po switches, as well as adding the -ek
or -ep switches to provide the database encryption key to the dbmlsync process.
================(Build #5430 - Engineering Case #815435)================
After deleting many rows from a table, queries involving that table may run
slower than in previous versions of the SQL Anywhere database server (for
example, versions 16.0.0 and 12.0.1). The algorithm used for table index
prefetching has been revised further to improve performance. This change
may improve the execution time performance of some queries.
================(Build #4947 - Engineering Case #818389)================
A secure web procedure call could fail if the server returns an HTTP error
code (rather than just returning the error code in the result set). This
has been fixed.
================(Build #4944 - Engineering Case #818307)================
If the server receives a connection attempt at a very specific point during
startup, the server could crash. This has been fixed.
================(Build #4944 - Engineering Case #818277)================
If the identity file passed to the server for TLS/HTTPS contains a server
certificate that is not expired but is signed by a certificate that is expired,
the server would have accepted and used the certificate anyway. This has
been fixed.
================(Build #4944 - Engineering Case #818275)================
If the identity file passed to the server for TLS/HTTPS is in PEM format
and contains the encrypted private key first, before the server’s certificate,
the server would report that the certificate had expired. This has been fixed.
================(Build #4944 - Engineering Case #818177)================
The -z database server option is used to display diagnostic messages, and
other messages, to the server console for troubleshooting purposes.
This option has been enhanced to include server subsystem shutdown log entries.
This information is useful in the event the database server might fail to
shut down completely.
The log entries take the form of “Shutting down <sub-system-name>
at <date-time>”.
These entries indicate to SAP engineering the steps reached in the shutdown
process and do not necessarily indicate that a subsystem was running (for
example, shutting down ODataServer does not indicate that the OData Server
was actually running but that the server was at the stage where it would
be shutting down the OData Server subsystem if it were running).
Use the -o database server option with the -z option to create a database
console messages log file.
The shutdown sequence start is indicated in the log by messages like the
following.
I. 01/28 09:43:35. Database server shutdown requested by DBSTOP
I. 01/28 09:43:35. Shutting down FeatureTrackingTimers at Mon Jan 28 2019
09:43
I. 01/28 09:43:35. Disallowing new connections
and completes with the following messages.
I. 01/28 09:43:37. Shutting down CacheManager at Mon Jan 28 2019 09:43
I. 01/28 09:43:37. Shutting down FeatureLogging at Mon Jan 28 2019 09:43
================(Build #4941 - Engineering Case #818214)================
When connected to the SAP SQL Anywhere or IQ database server and an incorrect
month value of 0 was specified (for example, 1985/00/12), no error was diagnosed.
A legal ISO 8601 ordinal date of the form YYYY-DDD, such as 1985-102, was
incorrectly flagged as an error. This date string represents 1985-04-12.
A legal ISO 8601 time with fractional seconds but no minutes or hours/minutes
such as T23.3 was interpreted incorrectly as 23:03:00.0 (it should be 23:00:00.3).
These problems have been fixed.
A date-time string of the format yyyymmddhhmmss (no spaces) is currently
supported as an extension to ISO 8601.
Now, date-time strings of the format yyyymmddhh or yyyymmddhhmm are supported
as extensions. Fractional seconds and time zone can be appended (for example,
201107260800-0500 and 2018112119.456+0100).
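Assuming the extended formats behave as described above, a quick sanity check
might look like this (example values only):

```sql
-- yyyymmddhh and yyyymmddhhmm are now accepted as date-time strings.
SELECT CAST( '2011072608'   AS TIMESTAMP ) AS ts_hours,
       CAST( '201107260830' AS TIMESTAMP ) AS ts_minutes;
```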
================(Build #4941 - Engineering Case #817784)================
In some cases, an SQL error was generated for expressions that should not
have been evaluated. For example, the following statement generated the SQL
error "Division by zero":
select if a1 = 0 then b1 else b1*(c1/a1) endif as col1,
if a1 = 0 then b1 else b1*(c1/a1) endif as col2
from T1
order by col2
This problem occurred only if the statement contained multiple not-always-evaluated
expressions, here b1*(c1/a1), which contained a common sub-expression,
here c1/a1, that could be factored out to avoid multiple evaluations of the
same sub-expression.
This has been fixed.
================(Build #4938 - Engineering Case #817937)================
In some circumstances, the server could crash or evaluate an expression or
user defined function twice. This has been fixed.
================(Build #4934 - Engineering Case #818062)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.2q.
================(Build #4934 - Engineering Case #818012)================
CREATE TABLE IF NOT EXISTS failed to report SQLE_UNKNOWN_USER in cases where
the table owner did not exist. This has been fixed.
================(Build #4931 - Engineering Case #817753)================
In rare circumstances, the server could crash when performing a stored procedure
call inside another SQL statement. This has been fixed.
================(Build #4921 - Engineering Case #817669)================
The server incorrectly returned the SQL error SQLE_CANNOT_MODIFY if a
procedure call in a trigger body took an old row column as an INOUT or OUT
parameter argument. This has been fixed. To work around the problem, you can
define the procedure parameter as IN, or assign the old row column value to
a local variable and use that variable as the procedure argument.
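The local-variable workaround might look like the following sketch (the table
T, column c, and procedure p are hypothetical):

```sql
-- Hypothetical trigger: copy the old column value into a local variable
-- before passing it to the procedure, instead of passing old_row.c
-- directly as an INOUT/OUT argument.
CREATE TRIGGER trg_after_update AFTER UPDATE ON T
REFERENCING OLD AS old_row
FOR EACH ROW
BEGIN
    DECLARE v_old INT;
    SET v_old = old_row.c;
    CALL p( v_old );   -- p declares its parameter as INOUT
END;
```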
================(Build #4919 - Engineering Case #817663)================
If a statement was automatically parameterized (parameterization_level Simple
or Forced) and a parameter value was larger than 250 bytes, it was possible
that a SQLE_COMMUNICATION_ERROR would be incorrectly reported. A workaround
is to set parameterization_level to Off. This has been fixed.
================(Build #4917 - Engineering Case #817690)================
The create text index statement allowed creation of a text index on an NCHAR
column with explicit specification of a char configuration, for example,
default_char. This combination of column and text configuration is not valid.
Now an error is thrown when such a combination is specified. This has
been fixed.
This issue does not affect text indexes created without explicit specification
of text configuration, or text indexes created with an NCHAR text configuration
explicitly specified.
================(Build #4917 - Engineering Case #817378)================
Some assertion error messages did not include the database name. The server
will now include the database name in the message text wherever possible.
This has been fixed.
================(Build #4915 - Engineering Case #817675)================
The server will now work correctly when the ALTER TABLE … ADD NOT NULL WITH
DEFAULT clause is used in conjunction with other clauses. This has been fixed.
================(Build #4914 - Engineering Case #817630)================
In some circumstances, the server could crash when performing an insert into
a table that has a table check constraint. This has been fixed.
================(Build #4911 - Engineering Case #817368)================
Under very rare circumstances, the server may crash when closing a pooled
HTTP connection. This has been fixed. To work around the problem, plan caching
can be turned off (set the option Max_plans_cached = 0).
================(Build #4911 - Engineering Case #816954)================
The server did not release schema locks if a DROP TABLE, DROP VIEW or DROP
MATERIALIZED VIEW statement did not find the named object, but found a
same-named object of a different object type. For example, if there is a view
named X but no table with that name, then DROP TABLE X would leave a schema
lock on view X. Additionally, the server did not return an SQL error if IF
EXISTS was not specified and an object of the expected object type was not
found. This has been fixed.
================(Build #4906 - Engineering Case #817479)================
Under some conditions, combining an ADD ... WITH DEFAULT alter clause with a
non-ADD alter clause could cause data corruption for a non-empty table. The
error will most likely manifest as assertion 200610 "Attempting to normalize
a non-continued row". Under other combinations of ADD ... WITH DEFAULT
and non-ADD clauses, the server may crash in the middle of the ALTER TABLE
statement. These problems have been prevented by temporarily disallowing the
combination of ADD ... WITH DEFAULT and non-ADD clauses for non-empty tables.
The server will report a "Table must be empty" error in such situations. This
has been fixed.
================(Build #4906 - Engineering Case #817458)================
When a database variable was used as an output parameter when calling
procedures, a permission error like the following could be returned even
though the database variable was accessible to the user.
Permission denied: you do not have permission to update "my_dbvar"
SQLCODE=-121, ODBC 3 State="42000"
This has been fixed.
Also, it was possible that database variable access was incorrectly allowed
for different users in executing procedures. This has been fixed too.
================(Build #4905 - Engineering Case #817493)================
A Linux database server could take a long time to scan the existing transaction
logs at startup. The server would scan all logs when using the '-ad' switch.
This issue has been fixed. Prior to the fix, scanning 100 nearly-empty logs
on a fast Linux machine took about one minute but with the fix it now takes
about 0.3 seconds.
================(Build #4903 - Engineering Case #817415)================
The sp_parse_json function could be extremely slow when there were null values
in the first set and many sets followed in the JSON input string. An example of
this follows:
[{a:10,b:z1,c:null}, {a:11.2,b:z2,c:301}, ...]
In this case, the algorithm's performance becomes order N-squared (O(N^2)).
Instead of returning a result in seconds, it can take several minutes, depending
on the number of sets.
This problem has been fixed.
Also, an incorrect result was returned for sets where the first value is
null and subsequent values are integer, floating-point, or Boolean types.
Instead of null, the first result was 0. The following is an example:
CALL sp_parse_json('tvar', '[{x:null}, {x:1}, {x:2}]');
SELECT tvar[[1]].x,tvar[[2]].x,tvar[[3]].x;
This problem has been fixed.
If the output row/array variable (argument 1) is defined before calling
sp_parse_json, the row/array variable is usually rejected and an error is
returned. The following is an example:
CREATE OR REPLACE VARIABLE tvar ARRAY OF ROW(
a VARCHAR(32),
b ARRAY OF ROW( b1 LONG NVARCHAR, b2 LONG NVARCHAR),
c BIT,
d NUMERIC(5,2)
);
CALL sp_parse_json('tvar', '[{a:"json", b:[{b1:"hello",
b2:"goodbye"},{b1:"say", b2:"again"}], c:true,
d:12.34},
{a:"json2", b:[{b1:"hello2", b2:"goodbye2"},{b1:"say2",
b2:"again2"}], c:false, d:56.78}]');
SELECT tvar[[x.row_num]].a AS a,
tvar[[x.row_num]].b[[y.row_num]].b1 AS b1,
tvar[[x.row_num]].b[[y.row_num]].b2 AS b2,
tvar[[x.row_num]].c AS c,
tvar[[x.row_num]].d AS d
FROM sa_rowgenerator(1,CARDINALITY(tvar)) AS x, sa_rowgenerator(1,CARDINALITY(tvar[[1]].b))
AS y;
This problem has been fixed. The sp_parse_json function will now accept
a wider variety of predefined output row/array variables.
================(Build #4900 - Engineering Case #817348)================
Previously, it was difficult to create an empty row variable: NULL had to
be specified for every column in the row.
A new parameterless ROW() constructor is now supported, allowing the user
to create an empty row without specifying NULLs. If a field in the row
is itself a row type, it is also initialized by the top-level constructor,
e.g.
CREATE VARIABLE myrowvar ROW ( field1 INT, field2 ROW ( id INT ) );
SET myrowvar = ROW();
SELECT (myrowvar).field1, (myrowvar).field2.id FROM dummy; // returns NULLs
Also, if a field in the row is an array, it is initialized to a zero-length
array.
================(Build #4900 - Engineering Case #817343)================
A previous security fix to return either NULL or *** for password columns
in the catalog inadvertently changed the data type of the returned column
from binary/varbinary to char. This problem has now been corrected.
================(Build #4898 - Engineering Case #817338)================
When a field of a row variable was used as an OUT (or INOUT) parameter of
a procedure, the procedure did not set the field properly. This has been
fixed. For example:
create procedure myproc(out id int) as
begin
set id = 3;
end;
create variable rowvar row(field1 int);
set rowvar.field1 = 1;
call myproc (rowvar.field1);
select rowvar.field1; // expect 3; previously it incorrectly remained 1.
================(Build #4894 - Engineering Case #817293)================
On Windows and Unix platforms other than Linux, SQL Anywhere only supported
up to 1024 processors. Certain virtualized or container environments (such
as Solaris zones) can report many more processors than are physically present
on the system and can assign processor IDs beyond 1024 to that environment.
In that case, the user might see messages such as the following at startup:
Processors detected: 0 logical processor(s) on 0 core(s) on 0 physical processor(s)
Processors in use by server: 0 logical processor(s) on 0 core(s) on 0 physical
processor(s)
This problem has been fixed and the new limit is 65536 processors.
================(Build #4893 - Engineering Case #817757)================
When using a PKCS #12 certificate, a memory leak could occur when connecting
to the database server. This has been fixed.
================(Build #4888 - Engineering Case #816440)================
When using the built-in web server and specifying an HTTP log (e.g. starting
the server with -xs "http(port=8080;log=http.log)"), the timestamps
in the log use local time; however, they did not adapt to Daylight Saving
Time starting or ending while the server was running. Restarting the server
after a change in DST would correct the issue. This has been fixed.
================(Build #4887 - Engineering Case #816996)================
When the server was running on Windows Server 2016, the operating system
was reported as Windows Server 2012 R2. This has been fixed.
================(Build #4886 - Engineering Case #817036)================
In very rare circumstances, the server could return assertion error 201503
when running a delete on a table with indexes. This has been fixed.
================(Build #4885 - Engineering Case #817080)================
Under very rare conditions, a server with plan caching enabled could crash
during shutdown with assertion 101426. This has been fixed.
================(Build #4885 - Engineering Case #816858)================
In very rare circumstances, the server could crash if a function or procedure
created with EXTERNAL NAME 'native-call' returned the special FLOAT or DOUBLE
values NAN, INF, or INFINITY and the value was used in a SQL expression.
The problem does not happen if the function or procedure is created as an
external procedure with EXTERNAL NAME '<call-specification>' LANGUAGE
<language-type>.
Also, the server incorrectly cleared the SQL error if a function or procedure
output parameter value could not be assigned due to a conversion or truncation
error.
These problems have been fixed.
================(Build #4881 - Engineering Case #816902)================
When running on Windows, making a connection to the database server was much
slower in v17 than in previous versions. This has been fixed.
================(Build #4880 - Engineering Case #816901)================
Shared memory connections on AIX, HP, and Linux leaked 56 bytes per connection.
This has been fixed.
================(Build #4879 - Engineering Case #816892)================
The SQLSTATE is a 5-character string that is associated with an error or
warning issued by the SQL Anywhere/SAP IQ database servers.
For example, the SQLSTATE "08W29" is associated with the error
"Request to start/stop database denied".
The mapping of SQLSTATE to error/warning is required to be 1:1 by the SQL
Standard. However, some error messages that were introduced in version 16
do not have 1:1 mappings.
The following is a list of messages and their newly reassigned SQLSTATEs
that were originally all mapped to SQLSTATE "28000" ("Invalid
user ID or password").
28W25 Invalid user ID or role name '%1' specified
28W26 Role "%1" already exists
28W27 User or Role ID '%1' does not exist
28W28 Use of WITH NO SYSTEM PRIVILEGE INHERITANCE option is not allowed
with %1
28W29 Operation would cause a role cycle
28W30 Specified System Privilege '%1' is Invalid
28W31 Specified LDAP server '%1' is not found
28W32 Specified user '%1' is a role
28W33 Specified role '%1' is not a user extended as role
28W34 Specified role '%1' is a user extended as role
28W35 Use of WITH DROP OBJECTS is not allowed with '%1'
28W36 The role '%1' was not dropped because it is granted to other users
or roles. Use the 'WITH REVOKE' option to drop it
If a SQL statement signaled SQLSTATE "28000", the message returned
for that state was not "Invalid user ID or password" as it should
have been. Instead the message was "The role '%1' was not dropped because
it is granted to other users or roles. Use the 'WITH REVOKE' option to drop
it". The following trivial SQL fragment demonstrates the problem.
BEGIN
DECLARE INVALID_LOGON EXCEPTION FOR SQLSTATE '28000';
SIGNAL INVALID_LOGON;
END
These problems have been corrected.
Also, an invalid 6-character SQLSTATE "08WB10" was assigned to
the error "Unable to clean directory %1". This has been corrected
to the 5-character code "08WBA". An attempt to signal "08WB10"
results in a right-truncation of string data error.
================(Build #4877 - Engineering Case #816843)================
If "validate ldap server" failed because of a search failure, approximately
1 KB of memory was leaked. This has been fixed.
================(Build #4876 - Engineering Case #816797)================
A LOAD TABLE statement that uses a variable or parameter name is recorded
in the transaction log using the variable or parameter name instead of its
value.
For example, consider the following LOAD TABLE statement.
CREATE OR REPLACE VARIABLE t_filename LONG VARCHAR = 'c:\\temp\\datavalues.dat';
LOAD INTO TABLE DBA.testtbl USING FILE t_filename ENCODING 'UTF-8';
The transaction log entry will look something like this.
--BEGIN LOAD TABLE-1023-04399059098: load into table "DBA"."testtbl"
using file 'c:\\temp\\datavalues.dat' encoding 'UTF-8'
--SQL-1023-04399059202
begin
load into table "DBA"."testtbl" using file "t_filename"
encoding 'UTF-8';
end
This problem has been fixed. The value, rather than the variable or parameter
name, is now recorded in the transaction log.
================(Build #4874 - Engineering Case #816808)================
In rare cases, attempting secure LDAPUA connections using the OS certificate
store could cause a server crash. This would only happen when the server
was running on Windows. This has been fixed.
================(Build #4874 - Engineering Case #816396)================
The database server returned the SQL error "Invalid setting for option
'audit_log'" during a database upgrade if auditing was enabled in
the database and the database was of version 16 or earlier. This has been
fixed. To upgrade such a database, you must now turn off auditing before
the upgrade; otherwise, the attempt to upgrade fails with the new SQL error
"Database upgrade not possible; database has auditing enabled".
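Assuming auditing was enabled through the PUBLIC auditing option, the upgrade sequence would now look something like this (a sketch, not an exact transcript):

```sql
-- Turn auditing off before upgrading a version 16 or earlier database;
-- otherwise the upgrade fails with
-- "Database upgrade not possible; database has auditing enabled".
SET OPTION PUBLIC.auditing = 'Off';
ALTER DATABASE UPGRADE;
```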
================(Build #4869 - Engineering Case #816850)================
In rare, timing-dependent cases, a mirror or copy node could disconnect from
its parent and hang indefinitely attempting to reconnect. For this to occur,
a rename had to have been done on the primary server at some point, and the
primary had to remain active while the child tried to reconnect.
A workaround is to restart the affected mirror or copy node database.
This bug has been fixed.
================(Build #4866 - Engineering Case #816510)================
If overlapping SYNCHRONIZE commands had been executed on the same database
server, it was possible that the dbmlsync process spawned by the database
engine would have failed to connect back to the database. A call to sp_get_last_synchronize_result()
to view the results of the failed synchronization would have shown that an
invalid userid or password had been used. This problem has now been fixed.
================(Build #4861 - Engineering Case #816502)================
The version of OpenSSL used by SQL Anywhere has been upgraded to 1.0.2p.
================(Build #4860 - Engineering Case #816182)================
The server could return assertion errors 201503, 201501, 200608, or others
if a REFRESH MATERIALIZED VIEW statement was cancelled or otherwise failed
and the server rolled its operations back. The problem only happened if the
statement contained an ISOLATION LEVEL clause with a setting other than
SHARE MODE, EXCLUSIVE MODE, or SNAPSHOT. Immediate materialized views are
not affected. This has been fixed.
================(Build #4859 - Engineering Case #816215)================
In rare circumstances, the server could crash when using an invalid index
hint. This has been fixed.
================(Build #4859 - Engineering Case #816120)================
In very rare circumstances, the server could be unresponsive while merging
the hash tables of a parallel hash join. This could happen if the hash table
merge took a long time and another connection ran a DDL statement or checkpoint.
This has been fixed. As a workaround, set Max_query_tasks to 1.
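The workaround named above can be applied per connection; for example:

```sql
-- Disable intra-query parallelism for the current connection only,
-- avoiding the parallel hash join merge entirely:
SET TEMPORARY OPTION Max_query_tasks = 1;
```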
================(Build #4859 - Engineering Case #814460)================
In very rare circumstances, the server could return the SQL error "Assertion
failed: 106105 Unexpected expression type dfe_Quantifier while compiling"
for a query with subselects. This has been fixed.
================(Build #4859 - Engineering Case #800146)================
In very rare circumstances, if a view that cannot be flattened (e.g. a grouped
view) is used in a statement and the view's select list is simplified during
query rewrite optimization, the SQLA optimizer could generate an invalid
query plan, and executing this plan could cause a server crash. This has
been fixed.
================(Build #4857 - Engineering Case #817361)================
In some cases, the server could choose a less-optimal plan for a query with
a join predicate. This has been fixed.
================(Build #4857 - Engineering Case #816228)================
The fix for QTS 811513 introduced a regression in how a histogram over a
string column is used to estimate the number of distinct values. This regression
could result in a significant under-estimate in certain cases, which could
negatively affect the optimization of queries containing grouping and/or
join predicates over string columns. This has been fixed.
================(Build #4856 - Engineering Case #814964)================
In some circumstances the SQL Anywhere query optimizer could over-estimate
the size of a FK-PK join. While the impact of the over-estimate may have
been slight for the particular join affected, the error could multiply through
the rest of the join strategy and result in a sub-optimal query plan. This
has been fixed.
================(Build #4853 - Engineering Case #816152)================
Work that was done for System Replication in DT caused a regression in the
mirroring statistics when used in combination with parallel recovery, so
parallel recovery was disabled on the high availability mirror and copy nodes.
This regression has been addressed and parallel recovery has been re-enabled
on all mirroring nodes.
This change should improve performance of mirror and copy nodes, and reduce
delays on primary servers that are replicating to other servers synchronously.
================(Build #4850 - Engineering Case #815994)================
In some circumstances, the ROUND function on a NUMERIC expression was not
performed. This could happen if the second parameter of ROUND was a positive
value equal to the scale of the result type of the NUMERIC expression.
This has been fixed.
================(Build #4847 - Engineering Case #816285)================
The database cleaner used to always do a commit when it completed, regardless
of whether it was required or not. These commits will now be skipped if they
are unnecessary.
================(Build #4843 - Engineering Case #816229)================
Some internal database user operations in the server did not perform optimally,
which could manifest as slow execution of certain operations on a server
with a large number of database users and a high volume of user activity.
This has been fixed.
================(Build #4834 - Engineering Case #815957)================
If the system is running very low on memory, the database server could crash
when TLS connections are received. This has been fixed.
================(Build #4833 - Engineering Case #815728)================
When trying to start the Java external environment, the error "Cannot
start Java external environment" can occur on Windows when SQL Anywhere
version 17 software is installed in the default location (C:\Program Files\SQL
Anywhere 17). The error is likely to occur for databases that have been upgraded
through several SQL Anywhere versions (for example, version 11 to version
17). This has been fixed.
Also, the Java external environment launch time has been improved for all
platforms.
================(Build #4830 - Engineering Case #809067)================
In very rare circumstances, the server could return assertion error 104904
while, or shortly after, running the procedure sa_index_density. This has
been fixed.
================(Build #4829 - Engineering Case #815321)================
Under very rare circumstances, the server could crash when closing a cursor
on a select that uses a parallelized index-only scan. A workaround is to set
Max_query_tasks to 1. This has been fixed.
================(Build #4826 - Engineering Case #815358)================
The system functions USER_NAME and SUSER_NAME returned the SQL error "Value
<value> out of range for destination" if the argument did not
fit into the signed integer data type. This has been fixed. The functions
USER_NAME and SUSER_NAME now take an UNSIGNED INT parameter, and USER_ID()
and SUSER_ID() return an UNSIGNED INT value.
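For illustration (a sketch, assuming the zero-argument forms of USER_ID and SUSER_ID return the current user's id):

```sql
-- USER_NAME / SUSER_NAME now take an UNSIGNED INT parameter, and
-- USER_ID / SUSER_ID now return an UNSIGNED INT value.
SELECT USER_NAME( USER_ID() );     -- name of the current user
SELECT SUSER_NAME( SUSER_ID() );
```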
================(Build #4826 - Engineering Case #815171)================
In some circumstances, the server could crash if a stored procedure contains
an object with a composite data type. This has been fixed.
================(Build #4816 - Engineering Case #815108)================
If a CREATE CERTIFICATE <cert-name> FROM <variable> statement
is executed and the string stored in the variable is longer than fits on
a database page, the server writes the variable name instead of the variable
value into the transaction log. When the transaction log is applied, the
variable does not exist and assertion 100948 is raised. This has been fixed.
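A sketch of the affected pattern (the certificate name, variable, and file name are hypothetical; xp_read_file is used here only to populate the variable):

```sql
CREATE OR REPLACE VARIABLE cert_text LONG VARCHAR;
SET cert_text = xp_read_file( 'server_cert.pem' );
-- If cert_text is longer than fits on a database page, the transaction
-- log previously recorded the variable name rather than its value:
CREATE CERTIFICATE my_cert FROM cert_text;
```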
================(Build #4814 - Engineering Case #815162)================
Some queries may run slower than in previous versions of the SQL Anywhere
database server (for example, versions 16.0.0 and 12.0.1). The algorithm
used for table index prefetching has been revised to improve performance.
This change may improve the execution time performance of some queries.
================(Build #4812 - Engineering Case #814984)================
If the server applies changes from a transaction log and the transaction
log contains a CREATE CERTIFICATE statement with a FROM FILE clause, then
assertion error 100948 is returned if the certificate name needs to be
delimited. This has been fixed.
================(Build #4812 - Engineering Case #814791)================
In some circumstances, the server could return assertion error 200610
when executing an ALTER TABLE that changes the data type of a column that
is part of a text index. This has been fixed.
================(Build #4807 - Engineering Case #814914)================
In rare cases, the SQLA server could hang. This could happen when a processor
was removed from the system while many requests were executing. This has
been fixed.
================(Build #4806 - Engineering Case #814840)================
Server discovery could fail to connect to an LDAP server if the LDAP server
is configured to require LDAP protocol version 3. LDAPUA is not affected.
This has been fixed.
================(Build #4802 - Engineering Case #814431)================
In very rare circumstances, the server could crash or return assertion error
109523 when executing a stored procedure that contains a SELECT statement
with ROLLUP, CUBE or GROUPING SET feature. This has been fixed.
================(Build #4802 - Engineering Case #808572)================
In rare circumstances, the server could crash when executing a recursive
query. This has been fixed.
================(Build #4799 - Engineering Case #814491)================
The server returned the SQL syntax error "The variable '<name>'
must not be NULL in this context" during a database upgrade or when
executing a web service function if the value of one of the clauses, for
example the CERTIFICATE clause, evaluated to a string that does not fit on
a database page. This has been fixed.
================(Build #4798 - Engineering Case #814523)================
The version of OpenSSL used by all SQL Anywhere and IQ products has been
upgraded to 1.0.2o.
================(Build #4798 - Engineering Case #814464)================
A Transact-SQL (T-SQL) query that ends with a semicolon is diagnosed with
"Syntax error near ';'". For example, this query uses T-SQL syntax
and ends with a semicolon.
WITH tempview (Surname, Givenname) AS
(SELECT Surname, Givenname, SUM(ID) FROM Customers WHERE ID > 100 AND
ID < 200
GROUP BY Surname,Givenname)
SELECT 'Last'=Surname, 'First'=GivenName, Str = Street FROM Customers;
If the T-SQL statement is prepared using an application framework like ODBC,
an error is diagnosed. A work-around is to remove the semicolon or to rewrite
the query removing the T-SQL aspects.
This problem has been fixed.
================(Build #4797 - Engineering Case #814396)================
If a database created with v16 has table encryption enabled, upgrading the
database using a v17 server would result in a database that cannot be started.
This has been fixed.
================(Build #4797 - Engineering Case #814259)================
In very rare circumstances, a query may fail with assertion error 106105
"Unexpected expression type dfe_PlaceHolder while compiling". A
workaround is to disable intra-query parallelism for the affected queries
(i.e. set option MAX_QUERY_TASKS=1 for the affected
query/connection). This has been fixed.
================(Build #4794 - Engineering Case #814426)================
In some out-of-disk-space situations, the server could behave incorrectly.
This has been fixed.
================(Build #4784 - Engineering Case #813678)================
Previously, a Microsoft SQL Server table that has a DATETIMEOFFSET column
could not be migrated to a SQL Anywhere database.
This problem has been fixed. Support has been added for the Microsoft SQL
Server DATETIMEOFFSET data type. This data type is represented as TIMESTAMP
WITH TIME ZONE in SQL Anywhere/SAP IQ databases. There are several ways to
migrate foreign tables to a database. The SQL Central "Migrate Database
Wizard" is one of these.
================(Build #4783 - Engineering Case #813493)================
The error messages of some assertion errors have been improved to provide
more information. The new message format contains a suffix " - Error: %s"
with the SQL error that caused the assertion.
================(Build #4766 - Engineering Case #813351)================
If a web procedure call is cancelled, it was possible for the return code
from the call to be SQLE_INVALID_STATEMENT rather than the expected SQLE_INTERRUPTED.
This has been fixed.
================(Build #4748 - Engineering Case #813655)================
Dynamic cache resizing might reduce the cache size below the minimum cache
size limit (-cl). This has been fixed.
================(Build #4593 - Engineering Case #816242)================
Shutting down a database that has a large number of loaded tables and/or
procedures takes a long time. This has been fixed.
================(Build #4343 - Engineering Case #803188)================
If external environment calls had been made using different external environments,
then an error could have occurred. For example, a mix of calls to methods
in JAVA and C_ESQL32 external environments, or a mix of calls to methods
in C_ODBC64, PHP, and JAVASCRIPT external environments, and so on could have
resulted in an error. One example of an error message was "The definition
of temporary table 'ExtEnvMethodArgs' has changed since last used".
However, other messages related to the temporary table may have appeared
as well. The ExtEnvMethodArgs temporary table is used to communicate argument
information between the database server and the external environment. This
problem has been fixed.
================(Build #4143 - Engineering Case #813342)================
In some rare circumstances, the server could be unresponsive while running
an sa_locks procedure call. This has been fixed.
================(Build #4140 - Engineering Case #813094)================
Under very rare circumstances, the server could hang if a DDL statement,
a procedure call on a TDS-based connection, and a select that queries other
connections' properties ran concurrently. This has been fixed.
================(Build #4134 - Engineering Case #812925)================
The error message for assertion error 101413 has been improved to provide
more information. The new message format is "Unable to allocate a multi-page
block of size %lu bytes".
================(Build #4131 - Engineering Case #812929)================
In rare cases the evaluation of spatial predicates may cause an out of memory
error. This has been fixed.
================(Build #4129 - Engineering Case #812883)================
Under some conditions, when an ALTER TABLE statement with multiple clauses,
including at least two ADD clauses with defaults and a non-ADD clause, was
executed on a non-empty table, the server could assert with "Internal
database error *** ERROR *** Assertion failed: 200610 (16.0.0.2222) Attempting
to normalize a non-continued row". After the server crash, the database
can start up normally. The issue can be worked around by splitting the ADD
and non-ADD clauses of the ALTER TABLE statement into separate statements.
This has been fixed.
================(Build #4128 - Engineering Case #812813)================
In rare cases queries that use parallel index scan can crash with an assertion
indicating bad page lookup. The problem can be worked around by turning off
query parallelism. This has been fixed.
================(Build #4114 - Engineering Case #812385)================
When using jConnect with a SQL Anywhere or SAP IQ database server, an attempt
to update a column defined as BIGINT using an updateable ResultSet object
may fail with the error message "Not enough values for host variables".
This problem was introduced in 17.0.6.2783 as part of an update to the TDS
protocol support. A temporary work-around may be to use an UNSIGNED BIGINT
or INTEGER instead. This problem has been fixed.
================(Build #4108 - Engineering Case #813034)================
When performing a point in time recovery to a provided timestamp, the server
may erroneously report that a recovery was being attempted to a point in
time earlier than when the original backup was taken. The server was comparing
the given timestamp, as provided in UTC or converted to UTC, against the
backup's checkpoint timestamp which was in local time. This issue would affect
users who are in a time zone ahead of UTC. This issue has been fixed.
================(Build #4104 - Engineering Case #811902)================
The bypass builder was failing to add IS NOT NULL prefilter predicates on
comparisons between a nullable column and a nullable expression that was
not known at open time. If the bypass built an index plan, the sargable
predicate on the column was matching NULL==NULL when the expression evaluated
to NULL. This has been fixed.
================(Build #4103 - Engineering Case #812030)================
The SQL function NEWID() could return duplicate values if it was executed
below an Exchange node in a parallel query execution plan. This has been
fixed.
================(Build #4099 - Engineering Case #811905)================
The first time a call is made into the Java external environment, an automatic
commit can occur. The following is an example.
CREATE TABLE test (string char(30));
INSERT INTO test VALUES('one');
SELECT JavaFunc();
ROLLBACK;
SELECT * FROM TEST;
In this example, the ROLLBACK has no effect because of a COMMIT occurring
during execution of JavaFunc.
This problem has been fixed.
================(Build #4099 - Engineering Case #811903)================
When DATEADD is used to add/subtract months or quarters to a date/time value
and the result should be '0001-01-01 00:00:00.000', an "out of range"
error results. The following is an example.
SELECT DATEADD( mm, 0, '0001-01-01 00:00:00.000' );
This problem has been fixed.
================(Build #4099 - Engineering Case #811890)================
In rare cases, query plans using index scans could be slow or, in very rare
cases, could crash with assertion 200130. This has been fixed.
================(Build #4095 - Engineering Case #811759)================
If a database created with a version of SQLA prior to v17 was upgraded with
a v17 server, OData producers created in the database would only persist
as long as the database is running. Once the database is stopped and restarted,
the producers would continue to appear in the system tables but would not
run. This has been fixed.
================(Build #4095 - Engineering Case #811731)================
Attempts to grant the SYS role to a user or another role would fail with
a permission denied error if the database was previously upgraded from version
12 or below. This has now been fixed.
================(Build #4092 - Engineering Case #808352)================
Under rare circumstances, updates on materialized views may cause database
assertion 200602. This has been fixed.
A workaround is to disable plan caching.
================(Build #4086 - Engineering Case #811513)================
The query optimizer uses estimates of predicate selectivity when performing
cost-based index selection. In certain scenarios the optimizer must estimate
the expected selectivity of an equality predicate on a column without knowing
the runtime value of the comparison. This changes modifies how equality
predicate selectivity is estimated for columns participating in keys in order
to make the estimate more robust in the case of highly-skewed data distributions.
In the case of a table with a multicolumn key in which some of the key columns
have highly skewed data distributions, this will cause the optimizer’s index
selection to be more conservative to favour a key index over a secondary
index that has better expected average case performance but substantially
worse worst-case performance.
================(Build #4085 - Engineering Case #811587)================
In rare cases, a server could hang while trying to estimate the selectivity
of a particular predicate using an index. The hang was the result of a deadlock
between the cleaner process and index-based selectivity estimation. This
has been fixed.
================(Build #4085 - Engineering Case #811517)================
If no path is specified in the EXTERNAL NAME clause of a CREATE FUNCTION
or CREATE PROCEDURE statement for LANGUAGE CLR, the error "Object reference
not set to an instance of an object" is issued when the SQL procedure
or function is called. The following is an example of a SQL FUNCTION that
interfaces to a CLR function.
CREATE OR REPLACE FUNCTION clrTable( IN tid INT )
RETURNS BIT
EXTERNAL NAME 'TableIDclr.dll::TableID.clrTable(int) bool'
LANGUAGE CLR
A work-around is to include a file path to the file (for example, .\TableIDclr.dll).
This problem has been fixed.
================(Build #4083 - Engineering Case #811510)================
When executing queries with option ANSINULL=OFF, in very rare cases the optimizer
could make a poor index choice for a simple single-table primary key lookup
query. This has been fixed.
When executing simple single-table queries, cost-based query optimization
is bypassed in cases where the query parser classifies the query as simple
enough to deterministically generate a plan without needing to estimate the
selectivity of query predicates (e.g., a simple single-table lookup with
a fully-specified primary key). However, when executing with option ANSINULL=OFF,
all queries are fully optimized in order to handle the special semantics
of NULL values dictated by ANSINULL=OFF. When a query that had been classified
as eligible for simple bypass was fully optimized in this way, the cost-based
optimization
did not take into account the runtime values of query parameters or host
variables, resulting in occasional bad index selection due to poor selectivity
estimation. The problem would be particularly pronounced for a table with
a multi-column primary key where an index exists on a subset of key columns
that have a highly skewed key distribution.
================(Build #4077 - Engineering Case #811484)================
The ESQL and ODBC external environment support module, dbexternc17.exe, crashes
with a heap corruption error when an input LONG VARCHAR argument is longer
than 32752 bytes.
The following is an example of a SQL procedure that acts as the interface
to an external procedure written in C and the CALL to that procedure that
results in a crash.
CREATE OR REPLACE PROCEDURE Ctest ( IN inString LONG VARCHAR )
EXTERNAL NAME 'SimpleCProc@c:\\c\\extdemo.dll'
LANGUAGE C_ESQL64;
CALL Ctest( repeat( 'X', 32753) );
This problem also exists in SQL Anywhere version 16 software (dbexternc16.exe).
This problem has been fixed.
================(Build #4077 - Engineering Case #811362)================
If a connection is attempting to start or stop a database with an alternate
server name while another connection to the same server is attempting to
start or stop a TCP listener, the server could block forever. This has been
fixed.
================(Build #4077 - Engineering Case #811205)================
In some circumstances, the server could crash running an update on an outer
join. This has been fixed.
================(Build #4075 - Engineering Case #811361)================
In rare cases using the SQLA Profiler on a busy server might cause a server
crash. This has been fixed.
================(Build #4075 - Engineering Case #811326)================
Running a TRUNCATE TABLE on a global temporary 'share by all' table might
cause a server crash. The crash can manifest as different server assertions.
Examples of possible assertions:
Assertion: 201501 (Page 0xf:0x… for requested record not a table page)
Assertion: 201135 (page freed twice)
Assertion: 201503 (Record 0x.. not present on page 0xf:0x… )
The key in these assertions is that the page id starts with 0xf. This indicates
a temp file page which is where global temporary tables reside. The table
would have been created as follows: CREATE GLOBAL TEMPORARY TABLE <table_name>
(...) ... SHARE BY ALL.
A workaround for this bug is to use a DELETE FROM <table_name> statement
followed by a COMMIT. This has been fixed.
================(Build #4061 - Engineering Case #810538)================
When running on recent Linux versions, shared memory connections that cross
between 32-bit and 64-bit client/server may hang. This has been fixed.
================(Build #4061 - Engineering Case #810400)================
In some circumstances, the server could return the assertion errors 201501
or 201503 for a validate table with snapshot. This has been fixed.
================(Build #4053 - Engineering Case #810481)================
In some circumstances, the server could crash when using %TYPE and %ROWTYPE
types. This has been fixed.
================(Build #4043 - Engineering Case #810648)================
The sa_get_table_definition built-in system procedure should return the SQL
statements required to create the specified table and its indexes, foreign
keys, triggers, and granted privileges. Previously, it did not include foreign
key constraints. This problem has been fixed. This fix also reverts dbunload
to its earlier behavior where the unloading of a subset of tables (-t option)
could include foreign key references to tables that are not included in the
unload.
================(Build #4042 - Engineering Case #810834)================
Queries that involve index scans may have performed poorly. The performance
hit was more visible when many concurrent connections were accessing keys in
close proximity to each other in the index. Other observed symptoms included
server lockups and hangs. This has been fixed.
================(Build #4040 - Engineering Case #810484)================
The error message for assertion error 101412 has been improved to provide
more information. The new message format is "Page number on page (0x%x:0x%x)
does not match page requested (0x%x:0x%x) on database %s". This has
been fixed.
================(Build #4040 - Engineering Case #810477)================
In some circumstances, the server could return a misleading SQL error when
trying to access a remote procedure for which the user has no permissions.
This has been fixed.
================(Build #4033 - Engineering Case #810165)================
If the option temp_space_limit_check was set to 'On' and the option max_temp_space
to a non-zero value, then the server may not have respected the quota, or may
have returned the non-fatal assertion error 111111 "Sort error - could not
add long hash row to run". This has been fixed.
================(Build #4022 - Engineering Case #810189)================
It is possible that the special value PROCEDURE OWNER or PROCEDURE_OWNER
when used in a SELECT list in a stored procedure returns the error "Cannot
convert 'user-id' to integer". The following is an example.
CREATE OR REPLACE PROCEDURE test()
SQL SECURITY DEFINER
BEGIN
DECLARE LOCAL TEMPORARY TABLE lcl(ID INT);
SELECT PROCEDURE OWNER;
END;
SELECT * FROM test();
This has been fixed.
================(Build #4022 - Engineering Case #810153)================
The WITH DATA clause of the DECLARE LOCAL TEMPORARY TABLE statement was not
working properly when executing a procedure. The local temporary table was
created, but no data was loaded. This has been fixed.
================(Build #4009 - Engineering Case #809863)================
Under exceptionally rare circumstances, the server may have looped infinitely during
an index backward scan. This has been fixed.
================(Build #4009 - Engineering Case #809817)================
In very rare circumstances, the server could crash during rewrite optimization
when inferring predicates if an SQL error SQLSTATE_SYNTACTIC_LIMIT was set
or the statement had been cancelled. This has been fixed.
================(Build #4005 - Engineering Case #809745)================
On Windows only, if the SQL Anywhere or SAP IQ database server is running
locally (that is, not as a service) and the process owner/user logs off or
restarts the computer, the database server is not shut down cleanly. When
the database is restarted, the database server puts the database through
a recovery process.
This problem has existed since 16.0.0 GA and does not affect 12.0.1 or earlier
versions. A work-around is to manually shut down the database server before
logging off or shutting down the computer.
This problem has been fixed.
================(Build #4005 - Engineering Case #809676)================
Under some circumstances, the server may have crashed when running the system
procedure sa_get_histogram(). This has been fixed.
================(Build #3997 - Engineering Case #809120)================
The SQL Anywhere database server may have crashed with a memory protection
violation at a non-NULL address when executing many concurrent procedure
calls that use EXECUTE IMMEDIATE to execute SQL batches. This has been fixed.
================(Build #3903 - Engineering Case #806950)================
The SQLA MobiLink server on Windows now supports SQL Server databases running
in Microsoft Azure:
1) Setup file:
Due to some differences between a regular SQL Server and a SQL Server running
in Microsoft Azure, the MobiLink server setup script file has been modified.
A SQL Server running in Microsoft Azure now shares the same MobiLink server
setup script file as the regular Microsoft SQL Server. The filename of the
setup script is syncmss.sql and it can be found under MobiLink\setup in
the SQLA installation. Please use only the one that comes with the SQLA
installation image.
The setup script file can be applied to a Microsoft Azure SQL server database
through the application named Microsoft SQL Server Management Studio;
2) ODBC driver:
The recommended ODBC driver for Microsoft Azure is “ODBC Driver 13 for SQL
Server”. This ODBC driver can be downloaded from the Microsoft download
site. The other SQL Server ODBC drivers are not tested and are not recommended;
3) Behaviors that differ from the regular SQL Server:
a) Blocking behavior
By default, uncommitted operations (inserts, updates, and deletes) will not
prevent other connections from accessing the same tables in Azure. This behavior
may differ from the regular Microsoft SQL Server. Therefore, the -dsd (disable
snapshot isolation for download) option should not be used if timestamp-based
download logic is used in the synchronization. Otherwise, it could cause
data inconsistency;
b) Column default values
Default values support literals and constants only in Azure. Non-deterministic
expressions or functions, such as GETDATE() or CURRENT_TIMESTAMP, are not
supported.
================(Build #3808 - Engineering Case #785008)================
Authentication may have failed when using PAMUA. This has been fixed.
================(Build #3473 - Engineering Case #808800)================
In some circumstances, the server could crash when executing procedure xp_startsmtp.
This has been fixed.
================(Build #3469 - Engineering Case #808726)================
If the statement CREATE STATISTICS was executed for an external table, the
server correctly returned SQL error code -660, but generated the unhelpful
error message "Query Decomposition: Unknown Stmt Type". This has been fixed.
================(Build #3461 - Engineering Case #808540)================
With the introduction of the new SQLA heap manager in 17.0.6, only specific
page sizes were supported on different platforms. Now the 64-bit server
is able to start under 4K, 8K, or 64K OS page sizes. The 32-bit server only
runs with a 4K OS page size.
================(Build #3421 - Engineering Case #806294)================
Under exceptionally rare circumstances, the server may have crashed during a close
cursor if all of the following conditions were true:
- The cursor's query uses an index on a local temporary table.
- The public option auto_commit_on_create_local_temp_index is set to Off
(Default).
- The option ansi_close_cursors_on_rollback is set to Off (Default).
- There was a rollback after opening the cursor and before closing it.
This has been fixed. To work around the problem, change one of the above options
to On.
================(Build #3386 - Engineering Case #806631)================
When using dbcapi to execute a wide insert, an error occurred if one of the
rows had a null value for a column where other rows did not. This has been
fixed.
================(Build #3385 - Engineering Case #806478)================
The server crashed on startup if the SADIAGDIR environment variable specified
a directory with a trailing slash or backslash. This has been fixed.
================(Build #2845 - Engineering Case #806167)================
Under very rare circumstances, the server may have hung when executing ALTER
VIEW statements. This has been fixed.
================(Build #2828 - Engineering Case #805878)================
The server could have crashed when profiling was turned on with specific
settings. This has been fixed.
================(Build #2827 - Engineering Case #805783)================
Under exceptionally rare circumstances, the server could have hung when executing
user event actions with complex action code. This has been fixed.
================(Build #2825 - Engineering Case #805917)================
Under some workloads on transactional global temporary tables that have
unique indexes, the server could have crashed when applying the UNDO log.
The server would crash with the assertion: *** ERROR *** Assertion failed:
100706 (17.0.4.2129) Unable to find table definition for updated record in
rollback log. This has been fixed.
================(Build #2825 - Engineering Case #805695)================
The server could have failed to recover a CREATE INDEX statement that contained
both WITH NULLS NOT DISTINCT and IN <dbspace>. This has been fixed.
================(Build #2825 - Engineering Case #805323)================
Under exceptionally rare circumstances, a query with very large nested expressions
could not be canceled, and other database requests could have been blocked.
This has been fixed.
================(Build #2817 - Engineering Case #805455)================
The runtime of the procedure sa_get_request_profile could have been very long
on large request files. The performance of this procedure has been improved.
For existing databases, customers need to run a database upgrade to get the
new system procedures.
================(Build #2816 - Engineering Case #805460)================
An incorrect result could have been returned for queries containing a spatial
predicate if the optimizer chose a plan that used a multicolumn index that
included the geometry column. In certain cases, equality predicates on any
columns that precede the geometry column in the index would not have been
evaluated, causing too many rows to be returned. This has been fixed.
As a workaround, customers can either drop the multicolumn index or else
add a query hint to force selection of a different index.
================(Build #2812 - Engineering Case #805247)================
In certain circumstances, a LOAD TABLE could have caused the server to spin
at 100% CPU indefinitely. This problem has been fixed.
================(Build #2795 - Engineering Case #804480)================
If an integrated login with a login name containing a backslash character
had been added to a database, then the schema file reload.sql created from
that database would have contained an invalid GRANT INTEGRATED LOGIN statement.
This has been fixed.
================(Build #2754 - Engineering Case #803541)================
If an application attempted to upgrade a case-sensitive database, the request
would have failed with a 'proc_id' not found error. This problem has been
fixed.
================(Build #2233 - Engineering Case #794468)================
If a 16.0.0 server from build 2031 or later loaded the dbrsa16 library from
a build prior to 2031, the server could have crashed. This has been fixed.
================(Build #2207 - Engineering Case #803114)================
Under rare circumstances, the server could have crashed when executing a
query with a window function. This has been fixed.
================(Build #2203 - Engineering Case #802767)================
For non-TDS clients, parameters can be used in a batch if the parameters
are confined to a single statement. However, if the following batch had been
prepared and executed, a "Communication error" occurred.
BEGIN
DECLARE arg1, arg2 VARCHAR(255);
SELECT ?,? INTO arg1, arg2;
SELECT arg1, arg2;
END
This problem has been fixed. If the argument values are "Hello"
and "there", the result set contains two columns with the values
"Hello" and "there".
================(Build #2193 - Engineering Case #802806)================
The server would have incorrectly returned the error SQLE_OMNI_REMOTE_ERROR
if the REGEXP search condition was used for proxy tables. This has been
fixed.
================(Build #2187 - Engineering Case #802689)================
Under very rare circumstances, the server may have failed with assertion
101417 - "Cross database page access", assertion 200130 - "Invalid page
found in index", or others, during database recovery or while applying
changes as mirror server. The problem only occurred with DDL operations
that used parallel query execution. This has been fixed. The problem can
be avoided by disabling parallel query execution for group PUBLIC
(set option PUBLIC.max_query_tasks=1).
================(Build #2183 - Engineering Case #802688)================
The server may have crashed when calling the sa_split_list procedure. This has been fixed.
================(Build #2182 - Engineering Case #802672)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.2j.
================(Build #2179 - Engineering Case #802533)================
When calling a web service using the SoapUI tool, the message "400 Bad
Request" error is returned.
This is caused by the presence of a CDATA section in a parameter value
(<![CDATA[ xml-string ]]>). CDATA can be used to embed an XML string
into an XML structure so that it is not parsed as part of the overall XML
structure.
For example, suppose the following SOAP request was sent to the database
server.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:fix="http://url.sample.com">
<soapenv:Header/>
<soapenv:Body>
<fix:authenticate>
<fix:ac_XML><![CDATA[<auth><uid>DBA</uid><pwd>sql</pwd></auth>]]></fix:ac_XML>
</fix:authenticate>
</soapenv:Body>
</soapenv:Envelope>
When the database server SOAP parser encounters the CDATA section, it returns
an error.
This problem has been fixed. The server now treats the CDATA string as plain
text.
================(Build #2177 - Engineering Case #802473)================
A server might crash on a system that has some CPUs offline. This has been
fixed. A work-around is to bind the server to the first set of CPUs that
are online beginning with CPU 0.
================(Build #2177 - Engineering Case #802464)================
When trying to call a remote function in a SQL Anywhere database that returns
a VARCHAR result, the error "Count field incorrect" is returned.
For example, suppose the remote SQL Anywhere database server defines the
following function.
CREATE OR REPLACE FUNCTION DBA.TestFunction( IN arg1 CHAR(255) )
RETURNS VARCHAR(32767)
BEGIN
RETURN 'Good day';
END;
The local database defines a remote server and a cover function to call
the remote function, and then calls the remote function as follows:
CREATE SERVER rmt
CLASS 'SAODBC'
USING 'DRIVER=SQL Anywhere 17;DSN=Test17;Server=demo17;UID=DBA;PWD=sql';
CREATE OR REPLACE FUNCTION TestFunction( IN arg1 CHAR(255) )
RETURNS VARCHAR(32767)
AT 'rmt..DBA.TestFunction';
SELECT TestFunction( 'Hello' );
An error was returned when the SELECT statement was executed.
This problem has been fixed. In the example above, the SELECT statement
now returns the expected VARCHAR result.
================(Build #2176 - Engineering Case #802462)================
When dbtran printed a CONNECT operation from the transaction log, the date
associated with the CONNECT operation would have a year that was 1600 years
in the future. This has now been fixed.
================(Build #2165 - Engineering Case #801919)================
If communication compression was used with packet sizes larger than around
32K, the client or server could have crashed. This has now been fixed.
================(Build #2161 - Engineering Case #798705)================
Under exceptionally rare circumstances, the server may have returned an incorrect
result set if all of the following conditions were true:
- the statement contained user-defined functions or stored procedures
- the statement was part of a function, procedure, event, or batch
- parallel query execution was performed
- the parallel subtree of the query plan referenced local variables or
function/procedure arguments
This has been fixed.
A workaround for the problem is to set the option max_query_tasks = 1.
================(Build #2154 - Engineering Case #801492)================
Under very rare circumstances, the server may have returned an incorrect
result set if local SQL variables were used with parallel query execution.
This has been fixed.
To work around the problem, set the option max_query_tasks = 1.
================(Build #2147 - Engineering Case #801308)================
Under very rare circumstances, the server may have crashed if miscellaneous
SQL functions were used in a parallel query execution. This has been fixed.
To work around the problem, set the option max_query_tasks = 1.
================(Build #2138 - Engineering Case #801195)================
A call to xp_startsmtp or xp_sendmail could have caused the server to hang
indefinitely. If the server disconnected from the SMTP server and then reconnected,
and the SMTP server stopped responding at the wrong time, a hang could have
resulted. This has been fixed.
================(Build #2138 - Engineering Case #801152)================
The server could have crashed in the spatial library in certain out-of-memory
conditions. This has been fixed.
================(Build #2132 - Engineering Case #801030)================
In some cases, a secure web procedure call could have been very slow or timed
out. This has been fixed.
================(Build #2129 - Engineering Case #801026)================
Additional changes were made to the fixes for Engineering case 800808 to
ensure the server shuts down cleanly after a failed xp_sendmail() call has
occurred.
================(Build #2129 - Engineering Case #800969)================
During index scans, the server could have failed the following assertions:
- 101412 “Page number on page does not match page requested”
- 200505 “Checksum failure on page x”
This has been fixed.
================(Build #2125 - Engineering Case #807981)================
If the DBA user is modified to no longer have the SYS_AUTH_RESOURCE_ROLE
granted and the database is subsequently unloaded and reloaded, then the
DBA user in the reloaded database will incorrectly have SYS_AUTH_RESOURCE_ROLE
re-granted. This has been fixed.
================(Build #2125 - Engineering Case #800808)================
The server could have crashed if an application used the system procedure
xp_sendmail with a message body that was greater than 256 bytes in length.
This problem was introduced by the fix for Engineering case 793866. The crash
has now been fixed.
================(Build #2119 - Engineering Case #800705)================
When the source or destination path argument for a file or directory function
such as sp_move_directory, sp_copy_directory, or sp_move_file contained a
symbolic link (SYMLINKD), the function may have failed.
Consider the following examples where “sqlany” and “sqlany17” are symbolic
links for c:\sa17 and c:\sa17.1 respectively (both directories exist):
SELECT sp_copy_directory('c:\\sqlany', 'c:\\temp\\sa17');
The above statement would have returned the error “c:\sqlany is not a directory”.
SELECT sp_copy_directory('c:\\temp\\sa17', 'c:\\sqlany17');
The above statement would have returned the error “Unable to create directory
c:\sqlany17”.
If a junction was used instead, there were no errors. This problem has been
fixed.
================(Build #2112 - Engineering Case #800426)================
Under some circumstances, the LIST function may have caused the server's
temp file to grow to a large size. This has been fixed.
================(Build #2097 - Engineering Case #800115)================
If a user had been granted the SYS_AUTH_SA_ROLE and/or the SYS_AUTH_SSO_ROLE,
those role grants would have been lost if the database was unloaded and then
reloaded. This problem has now been fixed.
================(Build #2088 - Engineering Case #799799)================
The REFRESH MATERIALIZED VIEW statement may have failed with the SQL error
"Run time SQL error -- ???" because it required a checkpoint, but
the checkpoint could not complete due to another operation that was running
concurrently. For this situation the server will now return the new SQL error
"Operation failed - could not complete checkpoint".
================(Build #2086 - Engineering Case #799118)================
The server may have incorrectly returned the error SQLSTATE_BAD_RECURSIVE_COLUMN_CONVERSION
if a recursive select statement used numeric expressions that did not have
the current default precision and scale. This has been fixed.
================(Build #2084 - Engineering Case #799117)================
Under very rare circumstances, the server may have crashed when executing
a recursive query. This has been fixed.
================(Build #2083 - Engineering Case #799484)================
The server incorrectly evaluated predicates of the form NULLIF( expr_1, expr_2)
IS NOT NULL to false if all of the following conditions were true:
- expr_1 was a not-nullable expression (e.g. a not null column)
- expr_2 evaluated to NULL
- expr_2 was known at open time of the query (e.g. a variable, host variable
or the constant NULL). This has been fixed.
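The three conditions above can be illustrated with a minimal sketch (table,
column, and variable names are hypothetical):

```sql
CREATE TABLE t ( pk INT PRIMARY KEY, val INT NOT NULL );
INSERT INTO t VALUES ( 1, 10 );

CREATE VARIABLE v INT;   -- v is NULL and its value is known at open time

-- val is not nullable (condition 1), v evaluates to NULL (condition 2),
-- and v is a variable known at open time (condition 3).
-- NULLIF( val, v ) returns val, which is NOT NULL, so the row should be
-- returned; the bug caused the predicate to evaluate to false instead.
SELECT * FROM t WHERE NULLIF( val, v ) IS NOT NULL;
```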
================(Build #2075 - Engineering Case #799495)================
Under rare circumstances, expensive statement logging or statement performance
tracking in SQL Anywhere 17 could have caused a client connection to incorrectly
return an error. This has been fixed.
================(Build #2074 - Engineering Case #799462)================
Previously, the SQL Anywhere database server integrated login support searched
for a user in the Global Groups on the domain controller (identified by the
integrated_server_name server option) and Local Groups on the database server
computer. Now, it also searches Local Groups on domain controller.
For clarification, Windows will only return the names of global groups in
which the user is a direct member, or the names of local groups containing
global groups in which the user is a direct member.
If user userA is listed in global group groupB which is, in turn, listed
in global group groupC, then only groupB is returned. Global group groupC
is not returned even though it contains global group groupB.
If a local group localD contains groupB, then userA is located by indirection
in localD.
If a local group localD contains groupC, then userA is not located by indirection
in localD.
================(Build #2073 - Engineering Case #793866)================
A call to the system procedures xp_startsmtp or xp_sendmail could have caused
the database server to hang indefinitely if the SMTP server was not well-behaved.
This has been fixed.
================(Build #2058 - Engineering Case #798913)================
If a batched insert failed due to a 'duplicate primary key', 'column cannot
be NULL' or some other error, then the ODBC driver would have incorrectly
stopped processing the batch and returned the error to the application. This
problem has now been fixed and the ODBC driver will now attempt to process
all of the rows in the batched insert. The driver will return SQL_SUCCESS
if all rows were inserted successfully and SQL_ERROR if one or more of the
rows were not inserted.
================(Build #2057 - Engineering Case #798912)================
Under very rare circumstances, the server may have crashed with "cache
page allocation" fatal error. This has now been corrected.
================(Build #2038 - Engineering Case #797805)================
The server could have deadlocked or hung if a dbspace was being extended
at the same time as a user-defined event was being loaded or reloaded. This
problem has been fixed.
================(Build #2023 - Engineering Case #784718)================
In very rare cases, in a parallel work load that performs rollbacks that
target a unique index, a server may have failed assertion 200112 - "Failed
to undo index delete". This has been fixed. Note, a database upgrade
is required to implement this fix.
================(Build #2021 - Engineering Case #797239)================
In very rare circumstances, requesting the stack_trace via sa_stack_trace()
of another connection could have caused the server to crash. This has been
fixed.
================(Build #2019 - Engineering Case #797233)================
A query containing a GROUPING function in the HAVING clause, that did not
appear elsewhere in the query, could have incorrectly returned a syntax error.
This has been fixed.
Note, a workaround is to include the expression containing the GROUPING
function in the select list.
================(Build #2019 - Engineering Case #797161)================
If the database server was shut down while hosting HTTP or HTTPS connections,
it could have crashed. This was more likely with the personal server than
the network server. This has been fixed.
================(Build #2000 - Engineering Case #797290)================
If the “quoted_identifier” option was set to ‘off’, upgrading the database
using the Database Upgrade utility (dbupgrad) or the “alter database upgrade”
statement would have failed with a syntax error. This has been fixed.
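On builds without this fix, a possible workaround is to temporarily restore the
default option setting before upgrading; a sketch, assuming sufficient privileges
to change the PUBLIC option:

```sql
-- Restore the default so the upgrade scripts parse correctly.
SET OPTION PUBLIC.quoted_identifier = 'On';
ALTER DATABASE UPGRADE;
-- Restore the previous setting afterwards.
SET OPTION PUBLIC.quoted_identifier = 'Off';
```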
================(Build #2000 - Engineering Case #797289)================
If the list of CC or BCC recipients supplied to xp_sendmail was created in
SQL using string functions or concatenation, it was possible for the recipient
lists to be ignored. This has been fixed.
================(Build #2000 - Engineering Case #797285)================
Some error messages would have contained ‘???’ when raised. This has been
corrected.
Some occurrences of COLUMN_NOT_FOUND have been modified to contain additional
information and will now give COLUMN_NOT_FOUND_IN_TABLE.
Calling sp_read_db_pages() could have raised FEATURE_REQUIRES_UPGRADE. This
has been corrected to now raise READ_DB_PAGES_CACHE_TOO_SMALL.
================(Build #2000 - Engineering Case #796892)================
Under very rare circumstances, the server could have crashed when using the
Cockpit. This has been fixed.
================(Build #2000 - Engineering Case #790498)================
On 64-bit systems, preparing or executing a statement in the JavaScript external
environment may have given an error, or caused the node executable to crash.
This has been fixed.
================(Build #2000 - Engineering Case #785052)================
The Interactive SQL utility and SQL Central allow editing table data without
writing an explicit INSERT, UPDATE, or DELETE statement. Manipulating a row
which contained an NCHAR/NVARCHAR/LONGNVARCHAR column could have failed if
its value contained characters which could not be represented in the database's
CHAR character set. This has been fixed.
Here's specifically what didn't work:
- When adding a row, foreign characters in NCHAR columns could be converted
to escape characters (0x1A).
- If an NCHAR column was part of a table's primary key, the row could not
be deleted if the column's value contained foreign characters.
- When updating a row with foreign characters in an NCHAR column, a message
saying that the row had been updated was returned, but the row could not be refreshed.
================(Build #2000 - Engineering Case #783537)================
In specific conditions where “internal use only” features were used, it was
possible that parallel execution plans were not considered when they should
have been. This led to execution plans that did not use parallelism. This
has been fixed.
================(Build #1758 - Engineering Case #803976)================
Under rare circumstances, the server could have crashed or returned assertion
error 109523 when sending an SMTP email using xp_sendmail. This has been
fixed.
================(Build #1700 - Engineering Case #791283)================
Under rare circumstances, the server could have crashed when executing a statement
involving a stored procedure or user defined function defined with SQL SECURITY
INVOKER. This has been fixed.
================(Build #1493 - Engineering Case #798668)================
In some cases, creating a circular string could have resulted in the server
entering an endless loop. This has been corrected.
================(Build #1481 - Engineering Case #798158)================
Calling the system procedure sa_refresh_text_index(), in the presence of
text indexes with names that could only be used as quoted identifiers, could
have caused an error to be returned. This has been fixed.
Note, this issue could be observed during dbunload -g if text index names
contained multibyte characters. A workaround is to manually refresh the text
indexes.
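The manual refresh mentioned above can be sketched as follows (the index and
table names are hypothetical):

```sql
-- Refresh a single text index explicitly; quote the index name if it
-- contains characters that require a quoted identifier.
REFRESH TEXT INDEX "myTextIdx" ON "myTable";
```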
================(Build #1479 - Engineering Case #798114)================
Spatial methods that generate SVG use a viewBox element whose size is computed
from the input geometry’s bounding box plus a small fixed constant. Some
browsers (notably Microsoft IE) have problems scaling SVG elements with small
viewBoxes and so may not have rendered the generated SVG correctly. This
change adds two new format parameter names, “MinViewBoxWidth” and “MinViewBoxHeight”,
to the format parameters accepted by SVG-generating methods (ST_AsSVG, ST_AsSVGAggr,
ST_AsXML, ST_AsText). These parameters permit specifying numeric values
for the minimum viewBox width and height. If left unspecified, the minimum
viewBox width and height defaulted to 0.0002, which was the previous behaviour
before this change.
For example, the expression “new ST_Point(0,0).ST_AsSVG(‘MinViewBoxWidth=0.3;MinViewBoxHeight=0.2’)”
generates SVG in which the viewBox enclosing the point has width 0.3 and
height 0.2.
================(Build #1477 - Engineering Case #797911)================
The server would have given a 'Table not found' error if an application attempted
to create a HANA proxy table and the actual HANA table had a mixed case owner,
schema or table name. This problem has now been fixed.
Note that with this change the application must now ensure that the proper
case is used when specifying owner, schema and table name in the AT clause
of the CREATE EXISTING TABLE statement.
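For example, assuming the remote HANA table is MySchema.MyTable on a remote
server named hana_srv (both names hypothetical), the AT clause must now preserve
the mixed case exactly:
CREATE EXISTING TABLE MyTable
AT 'hana_srv..MySchema.MyTable';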
================(Build #1477 - Engineering Case #797907)================
In rare cases, attempting to call the system procedure sp_objectpermission()
could have led to a server hang. This problem has been fixed.
================(Build #1477 - Engineering Case #797902)================
When processing a statement that returned many rows with a very low per-row
cost, it was possible for the total time to be higher than it should have
been. This has been fixed.
Measured slowdown was about 250 nanoseconds per row returned to the client.
================(Build #1474 - Engineering Case #797752)================
Inserting a round-Earth geometry could have failed with "Error parsing
geometry internal serialization" (SQLCODE -1415). This has been fixed.
================(Build #1473 - Engineering Case #797716)================
A busy server that had statement profiling enabled, might have crashed while
logging the plans or the text of expensive queries. This has been fixed.
Note, this problem only affected Windows and Linux systems.
================(Build #1470 - Engineering Case #797560)================
If a computer running the database server had at least 128 CPUs, connections
may have reported incorrect statistics. This has been fixed.
================(Build #1470 - Engineering Case #797545)================
Under very rare circumstances, the server may have returned an incorrect
result set or a syntax error for queries with a PIVOT clause. This has been
fixed.
================(Build #1466 - Engineering Case #797401)================
Under rare circumstances, the database server could have crashed while updating
the column statistics at the end of a DML statement. This has been fixed.
================(Build #1466 - Engineering Case #797365)================
When attempting to use the Upgrade Database wizard to change a database's
security model from definer to invoker, the security model would have remained
unchanged. This has been fixed.
================(Build #1466 - Engineering Case #782470)================
Under very rare circumstances, it may have taken a long time to cancel a
complex query during optimization. This has been fixed.
================(Build #1462 - Engineering Case #797145)================
Under very rare circumstances, the server would have crashed if the GROUP
BY clause of a query contained outer references. This has been fixed.
================(Build #1459 - Engineering Case #797001)================
The ROW constructor did not verify the uniqueness of the specified field
names. This has been fixed.
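For example, assuming the field names are given with AS (a sketch; the names
are placeholders), a constructor that repeats a field name is now rejected:
SELECT ROW( 1 AS a, 2 AS a );
-- now returns an error instead of silently accepting the duplicate name "a"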
================(Build #1457 - Engineering Case #796847)================
The SELECT INTO statement was incorrectly creating a table with FLOAT columns
for DOUBLE columns of the select's result set. This has been fixed.
================(Build #1457 - Engineering Case #796705)================
An authenticated server may have given authentication errors to connections,
even though the authentication string was a valid string provided by SAP.
This has been fixed.
================(Build #1452 - Engineering Case #796738)================
The server may have returned an incorrect result set if a query had an inner
query block with a GROUP BY CUBE or ROLLUP and an outer query block had predicates
in the WHERE clause. This has been fixed.
================(Build #1449 - Engineering Case #796579)================
It was possible for the server to crash when sp_parse_json was executed using
input that contained mismatched data types, where one type was a null and
the other type was an object or an array. For example, the following would
have crashed the server: [ {a: null}, {a: {b:1} } ]. This has now been fixed.
A workaround is to ensure that all objects within an array have exactly
the same data type. In the previous example, it could be fixed by changing
the input to: [ {a: {b:null} }, {a: {b:1} } ].
================(Build #1444 - Engineering Case #791217)================
When connected to a version 17 server, attempting to restore archive backups
created with versions 16 or older would have returned the error “Backup file
format is invalid”. This has been fixed.
================(Build #1440 - Engineering Case #796262)================
On Unix systems, if a server was started with the -ud option, and that server
attempted to start a database file that was already running on another server
(with a different name), the new server may have crashed on shutdown. The
reported error message also did not correctly indicate that the database
file was in use. This has been fixed.
================(Build #1435 - Engineering Case #796139)================
Under very rare circumstances, the SQL Anywhere server could have crashed
when executing a complex query with large number of threads executing in
parallel. This problem has now been fixed.
================(Build #1434 - Engineering Case #796085)================
Calling xp_startsmtp with a trusted_certificates file, specifying just the
filename (instead of “file=<filename>”), would have caused xp_startsmtp
to return error code 1. This has been fixed.
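For example (server, sender, and file names are placeholders), the certificate
file must be given with the file= prefix:
CALL xp_startsmtp( smtp_sender = 'sender@example.com',
                   smtp_server = 'mail.example.com',
                   trusted_certificates = 'file=trusted.pem' );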
================(Build #1434 - Engineering Case #796081)================
On systems running Microsoft Windows, the server may have crashed on startup
when attempting to obtain disk drive parameters if the disk driver did not
implement the IOCTL_STORAGE_QUERY_PROPERTY control code correctly. When successful,
the information returned by this system call can be seen using the following
SQL query.
SELECT DB_EXTENDED_PROPERTY( 'DriveModel' );
This problem has been fixed. If the disk drive parameters cannot be determined,
the drive model will now be “Unknown”.
================(Build #1428 - Engineering Case #795922)================
If a web procedure URI began with “https_fips://” indicating that HTTPS should
be used with the FIPS-certified libraries, calling the procedure would result
in SQLCODE -980, “The URI ‘<uri>’ is invalid”. This has been fixed.
================(Build #1428 - Engineering Case #795917)================
Certain assertion numbers could have been raised in more than one situation.
This has been fixed so that assertion numbers are now unique.
================(Build #1426 - Engineering Case #795751)================
In very rare cases, cancelling a statement that processed an XML document
could have taken a long time. This has been fixed.
================(Build #1424 - Engineering Case #795599)================
Under very rare circumstances, the server may have crashed during a database
cleaner run if there had been tables dropped and views created shortly before.
This has been fixed.
================(Build #1421 - Engineering Case #795609)================
The SQL functions NUMBER(*) and RAND() may have returned duplicate values
if they were executed below an Exchange query plan node of a parallel query
execution. This has been fixed.
================(Build #1418 - Engineering Case #795546)================
If a server was using the -zoc switch to log web procedure calls, and a web
procedure that used chunked encoding was called, the server could have crashed.
This has been fixed.
================(Build #1418 - Engineering Case #794511)================
Under very rare circumstances, the server may have crashed while receiving
host variables from a TDS based connection if the receiving TDS token stream
violated the TDS protocol definition. This has been fixed. The server will
now return a SQLSTATE_COMMUNICATIONS_ERROR error in this situation.
================(Build #1410 - Engineering Case #795349)================
If a database was created or upgraded with a version 10, 11, or 12 server,
but not upgraded any further, and contained a table or view called 'SYSROLEGRANTS',
an attempt to upgrade that database to version 17 would have failed. This
has been fixed.
================(Build #1410 - Engineering Case #795335)================
In very rare cases, the server could have crashed while closing a connection
that made external environment calls to a connection scoped external environment.
The problem would show up if the external environment had open cursors at
the time the connection was closed. The problem has now been fixed.
================(Build #1405 - Engineering Case #795198)================
When using a JSON data structure containing empty arrays (represented in
a string as '[]') as input to the sp_parse_json procedure, it was possible
for the server to crash. This has been fixed.
================(Build #1401 - Engineering Case #795027)================
In rare cases, using the new PKI routines to verify or sign messages, or
encrypt or decrypt data using RSA, could have caused the server to crash.
This problem has been fixed.
================(Build #1393 - Engineering Case #794728)================
If the wrong identity file password was supplied to the database server,
the error message displayed by the server would have been similar to “Error
parsing certificate file, error code=0x06065064”. This has been fixed.
================(Build #1393 - Engineering Case #794593)================
Incorrect results could have been returned if a SQL SECURITY INVOKER user-defined
function was invoked multiple times in a single statement, with at least
two calls being made by different users. For example, the issue would have
occurred if the same UDF was invoked from a view referenced in a query, and
in the SELECT list of the query directly. This has been fixed.
================(Build #1387 - Engineering Case #794531)================
When creating a foreign key with an ON DELETE SET DEFAULT or ON UPDATE SET
DEFAULT action on a column with no default value, the error message returned
by the server would have failed to reference the table name: “Constraint
'<column>' violated: Invalid value for column '<table>' in table
'???'”. This has been fixed so that the table name is now referenced.
================(Build #1384 - Engineering Case #794343)================
The server could have crashed executing a spatial query in a low memory situation.
This has been fixed.
================(Build #1364 - Engineering Case #793880)================
In rare cases, cached plans for client statements that used host variables
of type TIMESTAMP_STRUCT within expressions could potentially have returned
an incorrect result. This has been fixed. A workaround is to disable plan
caching by setting option max_plans_cached=0.
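The workaround can be applied for all connections, or temporarily for the
current connection only:
SET OPTION PUBLIC.max_plans_cached = 0;
-- or
SET TEMPORARY OPTION max_plans_cached = 0;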
================(Build #1361 - Engineering Case #793824)================
Under very rare circumstances, the server may have crashed when using a RANK
aggregate function. This has been fixed.
================(Build #1359 - Engineering Case #794878)================
The dbmanageetd tool can be used to read and write .etd files. When used
to write files in ETD format, some trace event records were written improperly,
generating files which could not be read. This has been fixed.
================(Build #1358 - Engineering Case #793740)================
Under extremely rare circumstances, the server could have crashed or hung
when creating an event. This has been fixed.
================(Build #1358 - Engineering Case #793674)================
When processing a statement that contained a subselect expression where the
select-list item used a LIST or COUNT aggregate, it was possible for the
statement to fail assertion 106901 - "Expression value unexpectedly
NULL in write". This has now been fixed.
================(Build #1356 - Engineering Case #793457)================
When the time_zone option was set, the value of the @@dbts global variable
was returned in the computer’s time zone, not that of the database. This
has been fixed.
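A sketch of the fixed behaviour (the time zone value shown is a placeholder;
any value accepted by the time_zone option will do):
SET TEMPORARY OPTION time_zone = 'America/New_York';
SELECT @@dbts;  -- now reflects the database time zone, not the computer's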
================(Build #1353 - Engineering Case #793370)================
The version of OpenLDAP used by the SQL Anywhere server and client libraries
has been upgraded to 2.4.43.
================(Build #1352 - Engineering Case #792816)================
The server may have failed the non-fatal assertion 102604 - "Error building
sub-select" if a query contained a DISTINCT that could have been eliminated,
the query cursor was not declared read-only, and there was a publication
with a subselect in its SUBSCRIBE BY clause. This has been fixed.
================(Build #1346 - Engineering Case #792898)================
The server may have crashed or failed assertion 109512 - "Freeing already-freed
memory" during a DROP ROLE or DROP USER statement if there were multiple
extended grants (e.g. SET USER and CHANGE PASSWORD). This has been fixed.
Note, a workaround is to revoke extended grants before dropping a Role or
User.
================(Build #1346 - Engineering Case #792313)================
The server can perform a fast TRUNCATE TABLE if the table is referenced by
foreign key tables and all the foreign key tables are empty. Under some circumstances
a fast truncate was not being executed. This has been fixed.
================(Build #1345 - Engineering Case #793009)================
In some combinations of logs and a backed up database, the server did not
realize that the database did not need recovery and failed to start the database.
A “Log not found” error was thrown instead. This has been fixed.
================(Build #1345 - Engineering Case #792925)================
If an execution plan executed a subquery (a subselect expression, EXISTS,
or ANY/ALL) many times, and the subquery was very cheap, then the overall
execution time of the query was higher than it could have been. This has
been fixed.
================(Build #1340 - Engineering Case #792440)================
The value of a database variable was being cached too eagerly in simple cached
DML statements. This has been fixed.
Note that in order to observe the issue, the database variable had to have
changed frequently.
================(Build #1339 - Engineering Case #792644)================
Providing a min_ticks parameter to the system function sp_property_history()
may have incorrectly caused no rows to be returned when the server was running
on Windows systems.
For example:
SELECT MAX( ticks ) INTO @tv FROM sp_property_history ( 'ActiveReq' );
-- wait a few seconds
SELECT * FROM sp_property_history ( 'ActiveReq', @tv );
-- no rows
SELECT * FROM sp_property_history ( 'ActiveReq', NULL ) WHERE ticks >=
@tv;
name,ticks,time_recorded,time_delta,value,value_delta
'ActiveReq',30460700,'2015-11-22 16:55:26.028-05:00',990,0.0,0.0
'ActiveReq',30461700,'2015-11-22 16:55:27.028-05:00',1000,0.0,0.0
'ActiveReq',30462700,'2015-11-22 16:55:28.028-05:00',1000,0.0,0.0
'ActiveReq',30463710,'2015-11-22 16:55:29.038-05:00',1010,0.0,0.0
...
This has been fixed. As a temporary workaround, divide ticks by 10 on Windows:
SELECT * FROM sp_property_history ( 'ActiveReq', @tv/10.0 );
================(Build #1334 - Engineering Case #792549)================
Under rare circumstances, the server could have crashed when getting the
procedure stack trace. This has been fixed.
================(Build #1334 - Engineering Case #792498)================
Under very rare circumstances, the server may have failed assertion 104904:
"latch count not 0 at end of request", or others, after executing
a REORGANIZE TABLE statement with PRIMARY KEY, FOREIGN KEY or INDEX clause,
or after shrinking an index. This has now been fixed.
================(Build #1334 - Engineering Case #792266)================
The UPDATE statement [SQL Remote] is executed by the Message Agent of SQL
Remote to determine existing and new recipients of the rows in a table:
UPDATE table-name
PUBLICATION publication-name
{ SUBSCRIBE BY subscription-expression |
OLD SUBSCRIBE BY old-subscription-expression
NEW SUBSCRIBE BY new-subscription-expression }
WHERE search-condition
expression : value | subquery
The statement does not modify any of the rows in the database, but puts
records in the transaction log to indicate movement of rows from or to a
recipient.
Since this type of UPDATE statement does not modify any rows, it should
not execute any BEFORE or AFTER triggers. Before this change it improperly
called BEFORE UPDATE triggers on the target table, leading to wasted work
in some cases. This has been fixed; BEFORE UPDATE triggers are no longer
called for this type of statement.
================(Build #1333 - Engineering Case #792271)================
The server may have crashed if a user interrupted starting the SQL Anywhere Cockpit.
This has been fixed.
================(Build #1328 - Engineering Case #792263)================
In some situations, creating an index on a very large table could have caused
the server to appear to be hung. The condition went away once the index
was created. This has now been fixed.
================(Build #1328 - Engineering Case #792227)================
Some valid round-earth geometries could have failed to input properly, either
giving an error, failing an assertion, or causing a server crash. This has
been fixed.
================(Build #1327 - Engineering Case #792221)================
If the server encountered a fatal database error, it would then write a minidump
file. During this process, the server may have overwritten the minidump file,
or created another minidump file due to a crash when freeing static data.
This has been fixed.
================(Build #1325 - Engineering Case #791615)================
Temporary file names for the server and various utilities were generated using
a standard library function that may have produced somewhat predictable file
names. These predictable temporary file names could have been exploited in
various ways. Collisions between processes or threads were also possible
and could have resulted in undesirable behaviour. This has been fixed.
A workaround that mitigates most of the issues is to set SATMP to a location
that is only writable by the engine and other trusted users.
================(Build #1320 - Engineering Case #792037)================
In very rare cases, the server could have crashed dereferencing a bad pointer
or connections could have failed to unblock. This has been fixed.
================(Build #1318 - Engineering Case #791896)================
In very rare cases the server may have failed assertion 201501: “Page X:Y
for requested record not a table page”. This has been fixed.
================(Build #1313 - Engineering Case #791754)================
In very rare timing dependent cases, recording event tracing could have resulted
in the server crashing. This has now been fixed.
================(Build #1310 - Engineering Case #791667)================
The function ST_PointOnSurface() requires an ST_Polygon or ST_MultiSurface
as input. Similarly, the function ST_IsRing() requires an ST_LineString as
input. Using these functions on valid geometry types may have resulted in
an error indicating that the geometry type was incorrect. This has been fixed.
================(Build #1310 - Engineering Case #791665)================
When creating a LineString with a round-earth SRS, points that were 180 degrees
longitude apart were rejected as being nearly antipodal, even if they were
physically close together. For example, the following geometry would have
failed to load, even though it is a relatively short line: LineString (-180
-84, 0 -90). This has been fixed.
================(Build #1310 - Engineering Case #790722)================
Under very rare circumstances, the server may have crashed or failed an assertion
"Assertion failed: 109512 Freeing already-freed memory". This has
been fixed.
To work around the problem, plan caching can be turned off (option Max_plans_cached
= 0).
================(Build #1308 - Engineering Case #791554)================
Zero-length LineStrings were not handled properly by the set operations,
ST_IsSimple, and ST_Buffer. Passing such a LineString to ST_Buffer may have
caused the server to fail an assertion. This has been fixed.
ST_IsSimple now returns TRUE if there are only two points in the LineString.
LineStrings containing more than two points that are also zero-length are
not considered to be ST_IsSimple.
Set operations now treat the zero-length LineString as a single point.
Zero-length segments within a given LineString whose overall length is non-zero
are ignored.
================(Build #1304 - Engineering Case #791165)================
In some situations, when a table had hundreds of foreign key constraints
defined, an insert into that table may have caused a server crash. The behavior
has now been changed to throw an error instead.
================(Build #1297 - Engineering Case #788462)================
The server may have incorrectly returned the error "Function or column
reference to 'rowid' must also appear in a GROUP BY", when a select
with aggregation had a correlated subquery in its select list and the subquery
contains an outer join that returned constants from the null-supplying side.
For example:
select ( select sum(T2.b2)
from T2 left outer join ( select 1 as x from T3 )
V3 on 1=1
where T1.a1 = T2.a2
) as Z,
count(*)
from T1
group by T1.a1
The query above has a main query block with GROUP BY T1.a1, a subquery
with alias Z, and an outer reference to the subquery using T1.a1. The null-supplying
side of the outer join V3 returns a constant "1 as x".
This has been fixed.
================(Build #1296 - Engineering Case #780893)================
Under rare circumstances, the server may have returned the error "Invalid
use of an aggregate function" when a query contained a proxy table and
an aliased flattenable subquery with grouping in the select list of a query
block. For example, the query below has a subquery with alias name "s1"
and would have returned the above error:
select *
from ( select ( select sum(V1.a1) x from T1 V1 ) as s1
from T1 V2
) V0,
T2_proxy
This has now been fixed.
================(Build #1288 - Engineering Case #790670)================
Accessing a proxy table mapped to a remote Oracle table which had special
characters in its name (such as ‘/’, ‘$’, ...) was reporting syntax errors
such as ORA-00903 and ORA-00933. The problem was due to the table identifiers
not being delimited properly, which has now been fixed.
================(Build #1286 - Engineering Case #790600)================
In some cases, UPDATE statements that included SET <variable> = <expression>
could have failed to evaluate the expression for the variable, setting it
to NULL instead. This has been fixed.
A workaround is to issue a separate query before issuing the update. For
example,
UPDATE T SET @var = T.x, T.y = 4 WHERE T.z=1
becomes
SELECT T.x INTO @var FROM T WHERE T.z=1;
UPDATE T SET T.y = 4 WHERE T.z = 1;
================(Build #1286 - Engineering Case #790589)================
In very rare situations, a server could have crashed if an application that
made a connection-scoped external environment call closed the connection
while the server machine was under heavy load. This problem has now been
fixed.
================(Build #1275 - Engineering Case #790149)================
In extremely rare circumstances, it was possible for the server to crash
during shared memory communication. This has been fixed.
================(Build #1271 - Engineering Case #789369)================
Simple-encrypted version 10 databases would have failed to start on a version
17 server. This has been fixed.
================(Build #1268 - Engineering Case #789852)================
If a server was running on a Unix system with multiple network adapters and
the MyIP parameter was used with a link-local IPv6 address (i.e. one that
begins with “fe80::”), clients may not have been able to find the server
using TCP/IP. This has been fixed.
================(Build #1267 - Engineering Case #789740)================
The server may have returned a sequence value for CURRVAL even if NEXTVAL
was never called in the current connection for this sequence. This has been
fixed.
================(Build #1267 - Engineering Case #786626)================
The server did not allow the use of sequence.currval as a default column
value. This has now been implemented.
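For example (sequence and table names hypothetical), a column default can now
reference currval:
CREATE SEQUENCE seq;
CREATE TABLE t1 (
    id  BIGINT DEFAULT seq.nextval,
    grp BIGINT DEFAULT seq.currval
);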
================(Build #1259 - Engineering Case #789267)================
The changes for Engineering case 786183 did not completely resolve a problem
where domain users explicitly present in the local group were no longer being
located. This has been corrected so that local users or domain users that
are members of a local group, as well as domain users who are indirectly
members of a local group (by virtue of being a member of a global group placed
within a local group) are now found and the group name is checked for an
integrated login mapping.
================(Build #1246 - Engineering Case #787344)================
HTTP web services that invoked a procedure call may not have returned the
correct result set if the result set description of the procedure could change
with each call. This has been fixed.
================(Build #1245 - Engineering Case #788586)================
Several of the secured feature system procedures like sp_create_secure_feature_key(),
sp_alter_secure_feature_key(), etc. have a parameter named auth_key. The
documented name of the parameter for the sp_use_secure_feature_key() is auth_key
as well, however the actual implementation used a different parameter name.
This has been corrected. The parameter name is now consistent with the other
secured feature system procedures and the documentation.
================(Build #1245 - Engineering Case #788580)================
If an unusual error occurred while executing the system procedure sp_generate_key_pair(),
any subsequent calls to sp_generate_key_pair() on any connection could have
caused that connection to hang. This has been fixed.
================(Build #1245 - Engineering Case #788560)================
When processing a statement that contained a subselect expression with the
select list item being either the LIST or COUNT aggregate and a GROUP BY
clause that contained only constant expressions or outer references to outer
query expressions, it was possible for the statement to fail with the error:
Assertion failed: 106901 "Expression value unexpectedly NULL in
write"
This has now been corrected.
================(Build #1243 - Engineering Case #788457)================
Several SQL statements for creating objects accepted both the “OR REPLACE”
and “IF NOT EXISTS” clauses at the same time. This has been fixed so that
at most one of these two clauses can be used. The following SQL statements
were affected:
CREATE GLOBAL TEMPORARY TABLE (v17)
CREATE MUTEX (v17)
CREATE SEMAPHORE (v17)
CREATE SPATIAL REFERENCE SYSTEM
CREATE SPATIAL UNIT OF MEASURE
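For example, for CREATE MUTEX (the mutex name is a placeholder) either clause
remains valid on its own, but combining them now raises a syntax error:
CREATE OR REPLACE MUTEX my_mutex;               -- still allowed
CREATE MUTEX IF NOT EXISTS my_mutex;            -- still allowed
CREATE OR REPLACE MUTEX IF NOT EXISTS my_mutex; -- now a syntax error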
================(Build #1243 - Engineering Case #788412)================
If an application made a SQL SECURITY DEFINER procedure call which changed
the effective user to something other than the logged in user, and the procedure
subsequently made a remote data access request with that different effective
user, and if there was no externlogin for that effective user, then there
would have been some instances where the remote connection succeeded without
the required externlogin. This issue has now been fixed.
================(Build #1242 - Engineering Case #788528)================
When executing an UNLOAD statement with QUOTES ALL option specified, the
quotes in CHAR values were not escaped. This has been fixed.
Note, when the QUOTES ALL option is specified, only the single quote (')
and double quote (") characters can be specified as the quote character.
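For example (table and file names hypothetical), quote characters embedded in
CHAR values are now escaped in the output file:
UNLOAD TABLE t1 TO 't1.dat' QUOTES ALL;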
================(Build #1242 - Engineering Case #788401)================
Several SQL statements for creating or altering objects would have accepted
some clauses more than once and silently ignored all but the last one. Others
would give an unhelpful error message like “Syntax error near ‘(end of line)’
on line 1”. This has been fixed so that duplicate clauses are no longer permitted
and will raise error code -1933 in the following statements:
CREATE/ALTER FUNCTION (web service)
CREATE/ALTER LDAP SERVER
CREATE/ALTER MIRROR SERVER
CREATE/ALTER ODATA PRODUCER
CREATE/ALTER PROCEDURE (web service)
CREATE/ALTER SERVICE
CREATE/ALTER SPATIAL REFERENCE SYSTEM
CREATE/ALTER SPATIAL UNIT OF MEASURE
CREATE/ALTER TIME ZONE
CREATE/ALTER USER
================(Build #1238 - Engineering Case #788300)================
Attempting to start or stop multiple OData Producers simultaneously could
have, in some cases, lead to one or more of the Producers failing to start
or stop. This problem has now been fixed.
================(Build #1238 - Engineering Case #788247)================
When running a statement with very complex expressions (for example in the
WHERE or SELECT clause), it was possible for the server to fail an assertion
or crash when the statement was closed. The complexity of the expression
needed was related to the maximum cache size. This has been fixed.
================(Build #1238 - Engineering Case #786492)================
If a multi-threaded application instantiated separate DbmlsyncClient objects
on separate threads, it was possible for the application to have crashed
if the Init function was called concurrently on multiple threads. The SYNCHRONIZE
command in the SQL Anywhere database engine uses the Dbmlsync API, so concurrent
calls to the SYNCHRONIZE command on different connections could also result
in a crash of the database server. These issues have now been fixed.
================(Build #1237 - Engineering Case #788218)================
Attempting to unload a version 16 database that had sync publications defined
would have failed with a “column server_protocol not found” error. This problem
has now been fixed.
================(Build #1237 - Engineering Case #788197)================
When connecting to an authenticated server using SQL Anywhere tools such
as Interactive SQL or SQL Central, executing statements that would modify
the database would have failed with the error: "-98 Authentication violation".
This problem was introduced by changes made for Engineering case 785757 and
has now been fixed.
================(Build #1237 - Engineering Case #787878)================
Stopping an HTTP(S) listener by calling the system procedure sp_stop_listener()
while processing an HTTP request, could have crashed the server. This has
been fixed.
================(Build #1232 - Engineering Case #669578)================
When executing particular forms of complex queries with very large expressions,
it was possible for the server to fail a fatal assertion. This has been fixed
so that these statements now report one of the two following errors:
SYNTACTIC_LIMIT 54W01 -890 "Statement size or complexity exceeds
server limits"
DYNAMIC_MEMORY_LIMIT 54W19 -1899 "Statement requires too much memory
during query execution"
================(Build #1230 - Engineering Case #787950)================
If an application executed the following sequence:
- a remote procedure call using a different effective user than the current
logged in user, followed by
- a DROP REMOTE CONNECTION to drop the remote connection created above,
followed by
- a remote procedure call using a different effective user than the one
above
then there was a small chance the server would have crashed when the second
remote procedure call completed. This problem has now been fixed.
It should be noted that this problem can, in rare cases, manifest itself
when the SQL Anywhere Cockpit is used to change the Cockpit settings.
================(Build #1228 - Engineering Case #761650)================
The server may have issued an error, for example "Column <name>
not found", if an INSERT, UPDATE or DELETE statement on a local table
referenced a proxy table, and the modified table had a publication whose
WHERE clause referenced additional tables.
This has been fixed.
================(Build #1222 - Engineering Case #787092)================
In certain rare scenarios, a cached plan for a statement that did not qualify
for simple bypass and which contained a host variable, could have returned
incorrect results or thrown the runtime assertion failure 106900 - "Expression
value unexpectedly NULL." This included statements where the host variable
was inserted automatically by statement parameterization.
Note that both plan caching for non-bypass client statements and automatic
statement parameterization were new features in version 17, and so earlier
server versions are unaffected.
In all known repros the same host variable expression either appears at
least twice in the statement or else is used in a comparison predicate against
a column that occurs in the join key of a hash join.
This has been fixed.
One workaround is to disable all plan caching by setting the connection
option max_plans_cached=0. If the incorrect behaviour is only observed on
statements where the host variable was inserted by automatic statement parameterization,
another workaround is to disable only statement parameterization by setting
the connection option parameterization_level='Off'.
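As a sketch, the workarounds above can be applied per connection with SET OPTION statements (shown here as temporary, connection-level settings):

```sql
-- Disable all plan caching for this connection:
SET TEMPORARY OPTION max_plans_cached = '0';

-- Or, disable only automatic statement parameterization:
SET TEMPORARY OPTION parameterization_level = 'Off';
```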
================(Build #1221 - Engineering Case #787592)================
Certain sub and dynamic classes built using a 1.8 JDK could not be installed
in the database. This problem has now been fixed.
================(Build #1221 - Engineering Case #787529)================
When creating a foreign key, the schema of the primary table was locked in
exclusive mode. This meant that creating the foreign table failed if another
connection was using the primary table. Further, the entire range of the
primary table was always locked, leading to an error if any row of the primary
table had uncommitted updates.
This has been changed. The schema of the primary table is now locked in
shared mode instead of exclusive. The row range of the primary table is locked
only if there is at least one row in the foreign table.
With these changes, it is now possible to create foreign keys from empty
foreign tables even when there is another connection with uncommitted row
changes in the primary table.
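For illustration, with hypothetical tables parent_t and child_t, the new behaviour permits a sequence like the following, assuming child_t is empty:

```sql
-- Connection 1: leave an uncommitted change on the primary table
UPDATE parent_t SET val = val + 1 WHERE pk = 1;  -- no COMMIT yet

-- Connection 2: creating the foreign key now succeeds, because the
-- primary table's schema is locked in shared mode and the row range
-- is not locked when the foreign table contains no rows
ALTER TABLE child_t
    ADD FOREIGN KEY ( parent_pk ) REFERENCES parent_t ( pk );
```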
================(Build #1221 - Engineering Case #787277)================
Under rare circumstances, the server could have crashed while tracing statements
with diagnostic tracing (application profiling). This has been fixed.
================(Build #1219 - Engineering Case #787419)================
Invoking a stored procedure that used a temporary table T (declared by the
invoker) with different definitions of T would have returned an error. The
restriction has now been relaxed to allow some mismatch between the table
definitions.
Note that this is not the recommended use. It is expected that a stored
procedure will be using the exact same definition of the temporary table
in all executions.
================(Build #1218 - Engineering Case #738277)================
The server may have crashed, or returned unexpected errors, if a SELECT from
DML referenced proxy or IQ tables. This has been fixed.
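The affected shape is a dml-derived table; a hypothetical example, where proxy_t stands in for a proxy or IQ table:

```sql
-- A SELECT over DML (dml-derived table); previously this could crash the
-- server or return unexpected errors when proxy_t was a proxy or IQ table
SELECT f.id, f.x
FROM ( UPDATE proxy_t SET x = x + 1 WHERE id < 10 )
     REFERENCING ( FINAL AS f );
```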
================(Build #1217 - Engineering Case #787345)================
The changes for Engineering case 783569 introduced the possibility of a server
crash when executing a Remote Data Access statement when an error was encountered
during creation of an underlying cursor. This crash has now been fixed.
================(Build #1217 - Engineering Case #787340)================
The server would not have started if a server name with spaces was entered
in the server startup dialog window. This has been fixed.
================(Build #1213 - Engineering Case #787105)================
Repeatedly executing INSERT statements with a VALUES clause containing two
or more rows could have caused a crash in memory constrained environments.
This has been fixed.
================(Build #1212 - Engineering Case #787014)================
Under very rare circumstances, the server could have returned an error, an
incorrect result, or entered an infinite loop, if a query contained Transact
SQL outer joins in subqueries that were part of a disjunctive clause. This
has been fixed.
================(Build #1210 - Engineering Case #785328)================
IF and CASE expressions can be optimized in some cases when used in search
conditions. These optimizations can remove unneeded subquery invocations
or identify new sargable predicates. In particular, IF expressions are generated
when a view V is used in the null-supplying side of an outer join and V contains
a column that is a constant.
The following changes have been made to provide better performance for queries:
1. If a subselect expression has a LIST or COUNT aggregate in the select
list and there is neither a GROUP BY nor a HAVING clause, then the subselect
expression cannot be NULL. If the expression is used in the SELECT list,
it will be described as not-NULL.
2. When considering a search condition of the form cond IS TRUE where
cond cannot be UNKNOWN, then simplify to cond.
3. When considering a search condition of the form cond IS FALSE where
cond cannot be UNKNOWN, then simplify to NOT cond.
4. When considering a search condition of the form cond IS UNKNOWN:
a. If cond cannot be UNKNOWN, simplify to FALSE
b. If cond is a comparison condition of the form c0 = c1 where one input
(say c0) cannot be NULL, then simplify to c1 IS NULL. Other comparison relations
(<,<=,>=,>,<>) are supported.
5. When considering expr IS NULL:
a. If expr is CAST( e1 AS type ) and the cast cannot introduce NULL,
simplify to e1 IS NULL
b. If expr cannot be NULL, simplify to FALSE
c. If expr is known to be the NULL value at open time, simplify to TRUE
d. If expr is IF pred THEN lhs ELSE rhs END IF, simplify according to
the rules described below.
6. When considering a comparison condition e1 = IF cond THEN lhs ELSE
rhs END IF, simplify it as described below. The IF expression may appear
on the left or right of the comparison, and all comparison relations are
supported.
The following table shows the simplified conditions generated for the following
condition:
IF pred THEN lhs ELSE rhs END IF IS NULL
The simplification is only performed in cases where lhs / rhs could not
generate an error or where they would necessarily be evaluated. The pred
condition must be either a comparison predicate or an IS NULL predicate.
Simplified Condition                                 Notes
FALSE                                                None of pred/lhs/rhs can be NULL
pred IS UNKNOWN                                      lhs/rhs cannot be NULL
(pred IS UNKNOWN) OR (lhs IS NULL)                   lhs == rhs (special case)
(pred IS UNKNOWN) OR (pred AND lhs IS NULL)          rhs cannot be NULL
(pred IS UNKNOWN) OR (NOT pred AND rhs IS NULL)      lhs cannot be NULL
pred                                                 pred cannot be UNKNOWN, lhs is known-at-open NULL, and rhs cannot be NULL
NOT pred                                             pred cannot be UNKNOWN, rhs is known-at-open NULL, and lhs cannot be NULL
pred AND lhs IS NULL                                 pred cannot be UNKNOWN and rhs cannot be NULL
NOT pred AND rhs IS NULL                             pred cannot be UNKNOWN and lhs cannot be NULL
lhs IS NULL                                          pred cannot be UNKNOWN and rhs == lhs (special case)
The following table shows the simplified conditions generated for the following
condition:
e1 = IF cond THEN lhs ELSE rhs END IF
Simplified Condition                                 Notes
cond AND e1 = lhs                                    The rhs is known to be the NULL value at open time
NOT cond AND e1 = rhs                                The lhs is known to be the NULL value at open time
(cond AND e1 = lhs) OR (NOT cond AND e1 = rhs)       cond is either a comparison condition or an IS NULL condition, and lhs and rhs are each either a known value or a column expression
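As an illustration of rules 4 and 6 (table T and its NOT NULL columns are hypothetical):

```sql
-- Rule 4a/4b: with T.x declared NOT NULL, ( T.x = 5 ) IS UNKNOWN can only
-- be TRUE if 5 IS NULL, so the condition simplifies to FALSE
SELECT * FROM T WHERE ( T.x = 5 ) IS UNKNOWN;

-- Rule 6: a comparison against an IF expression can be expanded to
--   ( T.flag = 1 AND T.y = 10 ) OR ( NOT T.flag = 1 AND T.y = 20 ),
-- exposing sargable predicates on T.y
SELECT * FROM T
WHERE T.y = IF T.flag = 1 THEN 10 ELSE 20 END IF;
```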
================(Build #1207 - Engineering Case #786804)================
If an application fetched a result set containing an nvarchar(1024) column
from a remote server, then that column value would have been invalid if the
original value was exactly 1024 nchar characters in length. This problem
has now been fixed.
================(Build #1204 - Engineering Case #786755)================
Procedures and functions that contained at least one input parameter of ROW
or ARRAY type, and whose body was a single query, may have incorrectly
reported the error "Correlation name not found". This has been fixed.
================(Build #1202 - Engineering Case #786610)================
Under very rare circumstances, the server could have crashed when accessing
a PUBLIC database variable. This has been fixed.
================(Build #1200 - Engineering Case #785327)================
When comparing values of type CHAR and NCHAR, SQL Anywhere uses inference
rules to determine the type in which the comparison should be performed.
Generally, if one value is based on a column reference and the other is not,
the comparison is performed in the type of the value containing the column
reference. If a view column (v) defined as a string literal of type NCHAR
was used in a query where the same constant string was used elsewhere as
an expression (c), and the query had 100 or fewer constants, then a comparison
between a CHAR column and the constant literal (c) might have incorrectly
failed to use the CHAR type. This has been fixed.
Further, when a query contained two tables (say R and S) where one had a
CHAR column and the other an NCHAR column (say R.ch and S.nch) and both columns
were equated to the same constant, then the server could have improperly
inferred that the two columns are equal:
R.ch = 'A' AND S.nch = 'A' ==> R.ch = S.nch
This inference is not correct. This has been fixed and such conditions are
no longer improperly inferred.
================(Build #1200 - Engineering Case #785325)================
When inserting into a table, if the SELECT block contained the sa_rowgenerator
procedure, then a work table was used. This has been changed. The work table
is no longer generated unless other conditions require it.
================(Build #1200 - Engineering Case #785322)================
When estimating the cost of a join, the server considers any expensive predicates
that might be evaluated. For example, if there is a subquery predicate, it
will affect the cost of evaluating the join.
These expensive predicates were not always included in the cost of evaluating
equi-joins. This has been changed so these predicates are considered when
estimating the cost of a plan.
For a particular customer query affected by this issue, run time reduced
from 18,268 sec to 247.8 sec with this optimization.
================(Build #1200 - Engineering Case #785318)================
When using the Plan Viewer tool in dbisql, the "Detailed statistics"
executes the plan. In this mode, precise timing is not recorded for every
node in the plan in order to minimize the distortion introduced by timing.
Nevertheless, more information is available and after this change it is now
displayed.
Statistics now included for all plans that have been executed:
In the graphical plan, if the plan has been executed then every node has
at least the following in Subtree Statistics:
- Invocations (actual)
- RunTime (estimate)
- RowsReturned (estimate and actual)
If the plan has been executed, every table scan and index scan node has
the following:
- Total rows read -- rows that were read from the table before applying
any search conditions
- Total rows pass scan predicates -- if there are scan predicates, this
line indicates how many rows passed the scan predicates [otherwise, the line
is not included]
- Total rows returned -- rows that pass all predicates for the scan and
were returned
Further, if the plan has been executed then individual predicates show the
actual number of evaluations and number of times they were true. Previously
this was only shown for “Detailed and node statistics”.
If a plan has been executed, the root node now contains the following:
- RunTime -- the actual active time is always shown. In certain cases it
was not available.
- ReqCountBlockIO / ReqTimeBlockIO
- ReqCountBlockLock / ReqTimeBlockLock
- ReqCountBlockContention / ReqTimeBlockContention -- only if request
timing is enabled with -zt
- CPUTime -- in addition to the estimate, the measured approximate CPU
time is now shown
- QueryMemMaxUseful and QueryMemLikelyGrant -- these are always included
now if the plan was executed.
If a plan has been executed, the row counts for each node are now used to
determine line thickness in the graphical plan viewer. Previously, these
were only available when “Detailed and node statistics” were available.
Formatting changes:
The title for nodes in the graphical plan now includes the number of rows
returned for the node. If the node was invoked multiple times, the invocation
count is also displayed.
E.g.
Table Scan (750 rows/10 invocations)
Scan employee sequentially
When stored procedures appear in a plan, the correlation name for the procedure
is displayed. This allows us to distinguish among multiple instances of the
same procedure.
If a predicate has a cost estimate (for example, it contains a subquery),
then the predicate has a suffix "cost .123 sec" to indicate the estimated
cost per evaluation.
When generating a text plan (EXPLANATION or PLAN), if the plan has actually
been executed (for example, in the RememberLastPlan), then actual row counts
and number of invocations are now included.
When generating a text plan (EXPLANATION or PLAN), if the plan includes
an Exchange, then only the first branch is displayed. There is an indication
of how many branches were present. If the plan was executed, then the row
count of each branch is included, separated by semicolons.
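The executed-plan statistics described above can also be captured programmatically; a sketch using the GRAPHICAL_PLAN function (the statement text is illustrative, and the statistics level 2 is assumed here to request detailed statistics):

```sql
-- The second argument is the statistics level; requesting detailed
-- statistics executes the statement so that actual row counts,
-- invocation counts, and timings are recorded in the plan
SELECT GRAPHICAL_PLAN(
    'SELECT * FROM employee',
    2 );
```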
================(Build #1200 - Engineering Case #785292)================
In some contexts, duplicate rows do not affect the result of a query. For
example, when generating rows for a UNION DISTINCT operation, duplicates
are eliminated.
This change modifies the DerivedTable operator so that in contexts where
duplicates are not needed, the operator eliminates duplicates eagerly. When
the derived table would return a row that is a duplicate of the immediately
previous row, it is eliminated. Duplicate detection is based only on the
prior row so the cost of detection is low but only rows that are immediately
repeated are eliminated.
When eager duplication detection is selected for a plan, the graphical plan
shows “Eliminate duplicates eagerly yes”. For plans with statistics, the
number of duplicates eliminated is shown.
For a query of about 1.5 million rows with many duplicate values, this optimization
can improve run-time by up to 30%.
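A hypothetical query shape that can benefit: the UNION (DISTINCT) above the derived table removes duplicates anyway, so the DerivedTable operator may drop immediately-repeated rows eagerly (table names are illustrative):

```sql
-- With the derived table's rows ordered on x, repeated x values arrive
-- consecutively and can be eliminated as they are produced, shrinking
-- the input to the UNION's duplicate elimination
SELECT x FROM ( SELECT x FROM big_t ORDER BY x ) dt
UNION
SELECT x FROM other_t;
```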
================(Build #1200 - Engineering Case #785291)================
INSERT statements did not use parallel execution plans. This has been changed
so that parallel plans are now considered for the SELECT block if the other
restrictions of parallel plans are met.
================(Build #1200 - Engineering Case #785289)================
If a query contains an ANY or ALL subquery that is not correlated to the
outer query block, the server may choose an execution plan materializing
all rows of the subquery one time with an index so each row of the outer
block can be compared to the stored results. If the subquery also contained
a UNION where at least one branch required a work table and at least one
branch did not then the plan included work tables under the union for all
branches requiring materialization. These were redundant due to the materialization
at the root and are no longer included.
================(Build #1200 - Engineering Case #785271)================
When estimating how many rows are returned for an ad-hoc join (one that is
not a PK/FK join), histograms on the joined columns are usually used to estimate
how many rows will match. When one or both of the columns are declared as
unique, histograms were previously not considered and in some cases this
caused the number of returned rows to be underestimated due to skew in the
inputs.
This change includes information from the histograms to increase the estimated
number of rows.
================(Build #1200 - Engineering Case #785266)================
During the semantic transformation phase of query processing, the server
normalizes and extends predicates in the query in order to find useful search
conditions.
One step of predicate normalization considers equality predicates that partition
values. Consider:
R.x = 1 AND T.x = R.x ==> T.x = 1
Before this change, this normalization also inferred join conditions, for
example:
R.x = 1 AND T.x = 1 ==> R.x = T.x
The inferred predicate is correct, but it does not help find a faster way
to execute the query. These additional join conditions are no longer generated
when the equality partition contains a constant.
================(Build #1196 - Engineering Case #790977)================
Under very rare timing-dependent conditions, an index that had long hash
values could have caused assertion failures (for example: 200114 - Can't
find values for row ... in index ...). This has been fixed.
================(Build #1191 - Engineering Case #786183)================
Engineering case 776698 resolved a problem where a domain group was included
in a local group, but users in the domain group were not being located in
the local group (via indirection). It introduced a problem where domain users
explicitly present in the local group were no longer being located. This
problem has been corrected. Indirect lookups are now performed separately
from direct lookups.
================(Build #1191 - Engineering Case #786120)================
In very rare cases, the transaction log can become corrupted. The symptoms
of the corruption can appear as checksum failures on page 0 of the transaction
log. This has been fixed.
================(Build #1191 - Engineering Case #786112)================
When setting the QuittingTime server property using the system procedure
sa_server_option(), parsing of the provided date string did not respect the
date_order or nearest_century options. The date_order was always assumed
to be YMD and the nearest_century was always assumed to be 50, despite
any connection, user, or public settings. This has now been fixed.
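For example (the value shown is illustrative), the date string passed here is now parsed according to the connection's date_order and nearest_century settings:

```sql
-- date_order affects how strings like '05/06/2025' are interpreted;
-- an unambiguous ISO-style value avoids the ambiguity entirely
CALL sa_server_option( 'QuittingTime', '2025-12-31 23:59:00' );
```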
================(Build #1188 - Engineering Case #785869)================
The system stored procedure sp_plancache_contents() would have returned incorrect
values within column "build_avg_msec". This has been fixed.
================(Build #1187 - Engineering Case #785858)================
In some cases, dynamic cache resizing on Linux systems might not have behaved
correctly. This has been fixed.
================(Build #1187 - Engineering Case #785851)================
SQL Anywhere installations no longer include PHP drivers. They are now posted
to a web page, but the versions posted only include the .0 release of each
major/minor version.
The PHP external environment attempts to load the external environment DLL
that matches the current phpversion(), which includes the release number.
Unless the release number is 0, or an appropriate driver was previously installed,
the correct driver will not be found and the PHP external environment will
fail to start.
This has been fixed. If a DLL with the full version number is available,
it will be used. Otherwise the DLL with the .0 release number will be used.
e.g. PHP 5.6.5 would use the 5.6.0 DLL.
SQLA 12.0.1 and 16.0.0 should continue to work as before, but the fix was
included to allow for possible future changes.
Workarounds include (one of):
- rename the SQLA PHP modules to a name that will be found
- set up a php.ini file containing the "extension" setting that will load
the SQLA PHP modules
- compile the PHP drivers in the SDK directory to match your PHP installation
================(Build #1183 - Engineering Case #785674)================
In rare cases, the database server could have hung indefinitely on start-up
when running on Windows 7 or later. This was due to a bug in Windows, KB
2719306. This has been fixed so that the server now works around the bug
if it detects that Windows does not have the patch installed.
================(Build #1182 - Engineering Case #785640)================
If the Content-Type header begins with "multipart/" but is not
"multipart/form-data" (e.g. multipart/mixed), the HTTP server would
have returned a 400 error, even though the request itself is valid.
This has been fixed. The body of the request is not parsed for these Content-Types,
nor is it accessible through HTTP_VARIABLE( 'body' ). The body may be accessed
through the HTTP_BODY() function.
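Inside a web service procedure, such a body can be read with HTTP_BODY(); a minimal sketch (the procedure name is hypothetical):

```sql
CREATE PROCEDURE echo_multipart_body()
RESULT ( body LONG VARCHAR )
BEGIN
    -- For multipart/* requests other than multipart/form-data, the body
    -- is no longer rejected; it is available unparsed via HTTP_BODY()
    SELECT HTTP_BODY();
END;
```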
================(Build #1181 - Engineering Case #785537)================
On Windows systems, if the SQL Anywhere database server was spawned by an
application and that application did not include environment strings (in
particular, the SystemDrive environment variable), then the database server
would not have been able to resolve the location of the ALLUSERSPROFILE folder
correctly. The folder path would have contained an unresolved environment
string, possibly resulting in misplaced files. A check has now been added
for this problem and the current directory will be used instead.
================(Build #1180 - Engineering Case #785455)================
The SingleCLR property is not necessarily numeric. It is a version number,
and, on unsupported platforms, is "NONE." It should not, therefore,
be trackable by the property history feature. This has been fixed.
================(Build #1180 - Engineering Case #785450)================
The version of OpenSSL used by all SQL Anywhere products has been upgraded
to 1.0.1o.
================(Build #1180 - Engineering Case #785391)================
If a batch containing an EXECUTE IMMEDIATE statement used syntax that was
allowed in both the Transact SQL and Watcom SQL dialects, and the connection
executed a Transact SQL statement immediately before executing the batch,
a "Procedure immediate not found" error could have been returned. This has
been fixed.
As a side effect of this change, to execute procedure [immediate] in a Transact
SQL batch, the user now has to use "exec immediate" or "execute [immediate]".
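In a Transact SQL batch, the explicit forms now look like this (the statement string is illustrative):

```sql
EXEC IMMEDIATE 'UPDATE t SET x = 1';
-- or
EXECUTE IMMEDIATE 'UPDATE t SET x = 1';
```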
================(Build #1177 - Engineering Case #785381)================
In timing dependent cases, the server may have crashed when calling sa_procedure_profile(),
sa_procedure_profile_summary(), or sa_flush_statistics(), or when preparing
to store procedure statistics, while another connection performed an operation
that caused a procedure to be unloaded. This has been fixed.
================(Build #1176 - Engineering Case #785331)================
Under rare circumstances, the server could have hung while the STACK_TRACE
function, or sa_stack_trace procedure, were being called and a procedure
that used a remote table was simultaneously being called. This has been
fixed.
================(Build #1176 - Engineering Case #785330)================
Under some circumstances, running the Index Consultant against workloads
that included queries against remote tables may have caused the server to
crash. This has been fixed.
================(Build #1173 - Engineering Case #785134)================
Under rare circumstances, a long running, memory intensive query could have
caused the server to crash. This has been fixed.
================(Build #1173 - Engineering Case #784735)================
Under rare circumstances, a query that generated a large intermediate result
set containing strings of medium length (usually in the range of 128-256
bytes long) could have crashed the server. This has been fixed.
================(Build #1165 - Engineering Case #784731)================
Under rare circumstances, cancelling a parallel query could have caused a
memory leak. This has been fixed.
================(Build #1164 - Engineering Case #784717)================
Attempting to use the SYNCHRONIZE statement while connected to a server running
on Linux/ARM would have failed with a “feature not supported” error. This
problem has now been fixed.
================(Build #1164 - Engineering Case #784450)================
ST_Distance computations between planar points, or between a planar point
and a non-curve line segment, were inappropriately rounded to the nearest
multiple of the SRS gridsnap value. Consequently, a measured distance less
than the SRS tolerance could have been rounded up to a value greater than
or equal to tolerance, which could have caused the predicate ST_WithinDistance
to return FALSE for a specified distance of zero, even though the predicate
ST_Intersects returned TRUE for the same pair of geometries. This has been
fixed.
================(Build #1163 - Engineering Case #780004)================
Under rare circumstances, a query executed using a parallel bloom filter
operator could have caused a server crash or an assertion failure - “memory
allocation too large”. This has been fixed.
================(Build #1157 - Engineering Case #783734)================
Under rare circumstances, an AUTO/MANUAL text index operation could have
failed to return an error when an error was encountered. This has been fixed.
================(Build #1157 - Engineering Case #782601)================
If the query for which the graphical_plan was being calculated included a
reference to a stored procedure that was expected to return a result set,
but did not do so, the SELECT graphical_plan( … ) statement would have returned
a warning at OPEN time. This has been fixed.
Note, the issue could also have affected some queries referencing such a
stored procedure.
================(Build #1064 - Engineering Case #788051)================
If a server was running on a Unix machine (other than Mac OS X) with multiple
network adapters and the MyIP parameter was used with a link-local IPv6 address
(i.e. one that begins with "fe80::"), clients may not have been able to find
the server using TCP/IP. This has been fixed.
================(Build #1064 - Engineering Case #788026)================
Under rare circumstances, the server may have crashed, or failed an assertion:
“Assertion failed: 109512 Freeing already-freed memory”. This has now been
fixed.
================(Build #1064 - Engineering Case #787646)================
When the database server was running on Unix, calling the system functions
property('HTTPAddresses') and property('HTTPSAddresses') may have returned
duplicate values (e.g. "IP1:port;IP2:port;IP1:port;IP2:port"). This has been
fixed.
================(Build #1064 - Engineering Case #786305)================
When using Java external environments on Mac OS X systems, the server may
not have automatically found the latest installed JRE. This has been fixed.
================(Build #1064 - Engineering Case #668971)================
When attempting to start a second server on an already started database,
the second server would have reported permission denied errors. It should
instead have reported "Resource temporarily unavailable". This only happens
on HP and AIX. This has now been fixed.
================(Build #1063 - Engineering Case #787711)================
Clients using shared memory on Linux could have, in rare circumstances, crashed
and caused the server to crash. This has been fixed.
================(Build #1063 - Engineering Case #785941)================
If the SATMP environment variable was set to a long value (near its limit),
the server may have run into unexpected errors in the shared memory port.
This has been fixed.
Note that the length of the SATMP path is intentionally limited on Linux,
and that has not changed.
================(Build #5860 - Engineering Case #819651)================
SQL Anywhere plugin incorrectly provides the OData option ServiceOperationColumnNames.
This has been fixed.
================(Build #5847 - Engineering Case #819660)================
SQL Central may report an internal error in the Create Database Wizard when
displaying collations on an existing server that is an older version than
the SQL Anywhere plugin. This has been fixed.
================(Build #4917 - Engineering Case #817664)================
Sybase Central incorrectly limited setting the option parameterization_level
to On or Off. This has been fixed.
================(Build #4907 - Engineering Case #817467)================
In the SQL Anywhere Profiler, clicking the Edit/Edit Filter Expression menu
could cause an internal error to be shown (or nothing to be shown at all),
depending on the profiling data collected. This has been fixed.
================(Build #4906 - Engineering Case #817454)================
Previously, the article properties dialog could show incorrect information
on its "SUBSCRIBE BY Restriction" tab.
If a publication was created with a subscribe by column, then when the
subscribe by information was viewed in SQL Central (or v16 Sybase Central),
the subscribe by column showed up as an expression, and not in the
"Column" combobox.
This has been fixed.
A similar problem in the Publication Editor was also fixed.
================(Build #4888 - Engineering Case #817070)================
Sybase Central would report an error for ClientPort (CPORT) values that were
port ranges or combinations of ports. This has been fixed.
================(Build #4428 - Engineering Case #791657)================
Attempting to create or delete an external login for all users would have
caused a syntax error. This has been fixed.
================(Build #2110 - Engineering Case #800410)================
When unloading a subset of tables into a new database, the Unload Database
wizard attempts to prevent selecting a table if it will cause the reload
to fail. The wizard would have prevented selecting a table that contained
a column with a domain data type. Selecting a table that contains a column
with a domain data type is now only prevented if the domain is owned by a
user other than SYS.
================(Build #2061 - Engineering Case #798718)================
Attempting to copy and paste column definitions in an unsaved table in the
table editor could have caused SQL Central to crash. This has been fixed.
================(Build #2000 - Engineering Case #797427)================
Searching within the MobiLink plug-in could have caused a "Cannot connect
to database" error. This has been fixed.
================(Build #1471 - Engineering Case #797608)================
Attempting to open the Set Primary Key wizard while a primary key constraint,
foreign key constraint, unique constraint, table check constraint, or column
check constraint was selected in the Constraints tab, would have caused SQL
Central to crash. This has been fixed.
================(Build #1462 - Engineering Case #797151)================
Copying and pasting, or dragging and dropping, an ARTICLE or TABLE onto a
PUBLICATION could have caused SQL Central to crash. This has been fixed.
================(Build #1440 - Engineering Case #796258)================
SQL Central allows for viewing the contents of a database table. If the table's
primary key is calculated (e.g. has a default value of "autoincrement"),
SQL Central could have reported an internal error in the following cases:
- A row was inserted without providing an explicit value for the primary
key, then the column header was clicked to sort the table, or
- A row was inserted without providing an explicit value for the primary
key, a second row was added without an explicit primary key value, and then
an attempt was made to edit one of the rows.
There may be other ways to cause the internal error.
Also, copying a cell containing the special "(DEFAULT)" value
would incorrectly copy text of the form "com.sybase.resultSetTable.DefaultValue@xxxxxx",
rather than the word "default".
These issues have been fixed. Note, these issues also affected the Interactive
SQL utility, which has also been fixed.
================(Build #1390 - Engineering Case #794675)================
If a user's only connection to a server was a connection to the utility database,
then attempting to open the server property sheet would have failed with
a permission denied error. Now the property sheet opens but only the General
page is shown.
================(Build #1390 - Engineering Case #794674)================
If a user's only connection to a server was a connection to the utility database,
then attempting to open the SQL Anywhere Cockpit would have failed with a
permission denied error. The "Open SQL Anywhere Cockpit" menu item
is now disabled in this case.
================(Build #1390 - Engineering Case #794673)================
If a server was running the utility database along with other databases,
and a user was connected to the utility database only, then attempting to
work with another database on the same server could have resulted in a permission
denied error. Specifically, an error would occur if a database was selected
in the tree or its property sheet was opened. This has been fixed.
================(Build #1390 - Engineering Case #794672)================
The popup menu for a utility database would have contained two consecutive
menu separators. This has been fixed.
================(Build #1390 - Engineering Case #794671)================
If attempting to connect to a database via a Connection Profile failed, then
SQL Central could have crashed. This has been fixed.
================(Build #1236 - Engineering Case #788161)================
In SQL Central, the "Create Service Wizard" allows for creating
a Windows service for a SQL Anywhere server/utility. Each service runs under
a Windows account. By default, the local system account is used, but any
user can be used.
If the user does not already have the Windows "Log on as a service"
privilege, a prompt is displayed asking whether to grant it to the user.
If "Yes" was clicked, SQL Central would have failed to grant the
privilege unless SQL Central was running as an administrator. This has been
fixed. Now, the usual elevated privilege prompts are displayed, and the
"Log on as a service" privilege will be granted.
================(Build #1224 - Engineering Case #787707)================
When editing numeric table values in Interactive SQL or SQL Central, the
value typed could have been subject to unexpected rounding errors before
the value was sent to the database. This problem occurred if the value
could not be exactly represented as a 64-bit IEEE 754 floating point number.
It has now been fixed.
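The underlying pitfall can be demonstrated in a few lines of Python (an illustration of IEEE 754 behavior, not the DBISQL code):

```python
from decimal import Decimal

# Illustration only: the typed value "0.1" has no exact 64-bit double
# representation, so routing it through a double silently changes it.
typed = "0.1"
as_double = float(typed)              # nearest representable double
assert Decimal(typed) != Decimal(as_double)
# Decimal(as_double) exposes the value actually stored:
# 0.1000000000000000055511151231257827021181583404541015625
```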
================(Build #1217 - Engineering Case #787354)================
Sybase Central can generate documentation for objects in a SQL Anywhere database.
After the files are generated, the user is asked if they want to view the
resulting HTML files. On Mac OS X systems, electing to generate the HTML
files into a directory whose path included a non-ASCII character would have
caused the browser not to open, and Sybase Central would report an internal
error. This has been fixed so that now the browser opens correctly.
Note that the problem was limited to opening the web browser. The HTML files
are generated without issue.
================(Build #6307 - Engineering Case #824774)================
When using the database tool dbunload to unload and/or rebuild a database,
a syntax error message could appear. The message could appear if a table
had several hundred nullable columns (columns with attribute NULL). This
has been fixed.
================(Build #6306 - Engineering Case #824777)================
It was not possible to ping a Data Lake IQ server if the ENC options were
given only on the Advanced page. This has been fixed.
================(Build #6288 - Engineering Case #824421)================
Previously, when connected to an ASE database in Interactive SQL (DBISQL),
some statement errors were not reported. This has been fixed.
================(Build #6282 - Engineering Case #824588)================
The product name for Data Lake IQ has been updated in the "Connect"
dialog. The text "SAP HANA Cloud Data Lake" has been replaced
with "Data Lake IQ".
================(Build #6275 - Engineering Case #824190)================
When fetching a result set from a SQL Anywhere database, the rows that are
displayed could be incomplete if there was an error fetching one of the rows.
Further, it was possible that no error message was shown to alert the user.
This has been fixed.
================(Build #6174 - Engineering Case #808508)================
Under some circumstances, a client application connected using the shared
memory link could get SQL code -85 "Communication error" if prefetching
was turned on and the server sent MESSAGE TO CLIENT messages to the client
application as part of prefetching. This has been fixed.
================(Build #6062 - Engineering Case #821072)================
The JRE supplied with SQL Anywhere was missing MSVC 12 runtime components.
Problems such as Sybase Central crashing or failing to start could occur as a result.
This has been fixed.
================(Build #6036 - Engineering Case #820939)================
When using the Interactive SQL Generate INSERT or UPDATE statement feature
for a row in the Results window, a date value would have the wrong year in
the generated INSERT statement if the date occurred in the 53rd week of the
year (generally speaking, the last few days of December). For example, if
the result set row contained the following data, then the generated INSERT
statement would contain the wrong year for some of the dates.
6 2020-12-25 2020-12-26 2020-12-27 2020-12-30
INSERT INTO "DBA"."TestTable"
("ID","DateColumn1","DateColumn2","DateColumn3","DateColumn4")
VALUES(6,'2020-12-25','2020-12-26','2021-12-27','2021-12-30')
This has been fixed.
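The general class of bug, formatting with a week-based year instead of the calendar year, can be sketched in Python (an illustration of the pitfall, not the utility's actual code):

```python
from datetime import date

# Near a year boundary, the ISO week-based year differs from the
# calendar year; a formatter using the week-based year produces the
# wrong year for such dates.
d = date(2021, 1, 1)                 # Friday, part of ISO week 53 of 2020
assert d.year == 2021                # calendar year: what INSERT needs
assert d.isocalendar()[0] == 2020    # week-based year: off by one here
```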
================(Build #6035 - Engineering Case #820905)================
The SQL Anywhere Service Utility (dbsvc) could incorrectly quote the executable
path if the provided path was already quoted. This has been fixed.
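The idea behind the fix can be sketched as follows (a hypothetical helper, not dbsvc source; the example path is also invented): add quotation marks only when the path is not already quoted.

```python
def quote_executable_path(path: str) -> str:
    # Leave an already-quoted path untouched; otherwise wrap it.
    if path.startswith('"') and path.endswith('"'):
        return path
    return '"' + path + '"'

# Quoting is now idempotent: applying it twice adds quotes only once.
p = 'C:\\SQL Anywhere 17\\Bin64\\dbsrv17.exe'   # hypothetical path
assert quote_executable_path(quote_executable_path(p)) == '"' + p + '"'
```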
================(Build #5967 - Engineering Case #820328)================
With macOS 10.15 and iOS 13, Apple introduced new security requirements for
trusted certificates which could cause a server certificate generated by
the createcert utility to be rejected by TLS clients running on these OSes.
The following changes have been made to the createcert utility to address
these new requirements:
1. The minimum RSA key size has been increased from 512 bits to 2048 bits.
2. If a certificate is created with Digital Signature, Key Encipherment,
or Key Agreement key usages, createcert will automatically add the Server
Authentication and Client Authentication extended key usages.
3. createcert will now set the Subject Alternative Name extension to the
value of the Common Name.
Also, the viewcert utility has been enhanced to display the Extended Key
Usage and Subject Alternative Name fields.
================(Build #5921 - Engineering Case #820003)================
Unloading a MobiLink client database that has a publication with scripted
uploads could have reported SQLE_EXPRESSION_ERROR. This issue has now been fixed.
================(Build #5824 - Engineering Case #819501)================
Table data can be viewed in SQL Central. In recent builds of the 17.0 software,
data values were incorrectly truncated at 32768 bytes. In older builds of
17.0, and in version 16.0, values were not truncated.
This has been fixed.
A similar problem existed in DBISQL when the TRUNCATION_LENGTH option
was set to 0 (meaning "no limit"). This setting resulted in values
displayed in the "Value of Column" dialog being truncated at 128K
(a limit imposed by the dialog), without any warning that the displayed value
was incomplete. Now, a warning appears.
================(Build #5788 - Engineering Case #819194)================
The DBISQL SYSTEM statement failed to launch executables on Windows. This has
been fixed.
================(Build #5781 - Engineering Case #819148)================
When selecting and copying rows from the Results pane in Interactive SQL,
CHAR and VARCHAR columns were pasted with apostrophes as quotation marks, but
LONG VARCHAR columns were not. For example, the results from executing the
following statement are pasted below the statement.
select givenname, surname, cast(city as long varchar) from Contacts;
'Jane','Hildebrand',Kanata
'Larry','Simmon',Kitchener
'Susan','Critch',Yale
This has been fixed. LONG VARCHAR columns are now quoted using apostrophes.
================(Build #5779 - Engineering Case #819163)================
Previously, clicking on a variety of help links in SQL Central and Interactive
SQL could result in a "Topic not found" error dialog being shown
if the documentation was installed locally. This has been fixed so that
the online books are opened.
================(Build #5748 - Engineering Case #818950)================
Previously, if the "Check for software updates and notices" option
in the DBISQL utility was unchecked, changing any other option would cause an
internal error when the "Options" dialog was closed. This has been
fixed.
================(Build #4939 - Engineering Case #818163)================
The database unload utility (dbunload/iqunload) generates a reload SQL script
that installs Java classes before setting the location of the Java VM (for
example, java.exe on Windows). Loading Java classes involves starting the
Java VM and this could fail if the LOCATION entry in the sys.sysexternenv
table is required to launch the VM.
When running the reload script, the error message that could appear is:
***** SQL error: External environment could not be started, 'external executable'
could not be found
A work-around is to set the JAVA_HOME environment variable, or to include
the Java executable in the PATH.
This has been fixed. The ALTER EXTERNAL ENVIRONMENT JAVA statement is now
placed before the INSTALL JAVA statements in the reload script.
================(Build #4935 - Engineering Case #818024)================
Service definitions failed to quote the executable path. This has been fixed.
================(Build #4876 - Engineering Case #816795)================
If any of the log scanning tools (dbmlsync, dbremote, dbtran) had to scan
a large portion of the transaction log (roughly 1GB but varies slightly by
tool) and reached the maximum cache size that could be kept in memory, the
log scanning code would still spend significant effort attempting to grow
the cache only to discover that no additional cache could be allocated.
The algorithm is now more efficient when the maximum cache size has been
reached.
================(Build #4114 - Engineering Case #812412)================
The Transaction Log utility (dblog) failed to check whether the database version
was newer than it could handle. It uses the DBChangeLogName dbtools library function
to update the database log file entry.
For example, the version 16 DBChangeLogName function will attempt to operate
on a version 17 database, possibly with erroneous results.
Also, the version 16 DBCreatedVersion function reports "16" for
a version 17 database. It fails to determine that the database is actually
newer. These problems have been fixed.
Now DBCreatedVersion will return VERSION_UNKNOWN in the created_version
field for a database store format that is newer than the version of the library
function code.
Possible return values for version 17 DBCreatedVersion include VERSION_UNKNOWN,
VERSION_17, VERSION_16, VERSION_12, and so on.
Possible return values for version 16 DBCreatedVersion include VERSION_UNKNOWN,
VERSION_16, VERSION_12, and so on.
These constants are documented in the dbtools.h header file. The Transaction
Log utility (dblog) will return an error message for a store format that
is newer than it can handle. In such cases, it will return a message similar
to the following:
dblog -t newlog test17.db
SQL Anywhere Transaction Log Utility Version 16.0.0.2618
Unable to open database file "test17.db" -- test17.db was created
by a different version of the software. You must rebuild this database to
use it with this version of SQL Anywhere.
In the example, dblog is from version 16 and the database was created with
or upgraded to version 17.
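The fixed version check amounts to the following logic (a Python sketch; the constant names mirror dbtools.h, but their values here are placeholders and the real API is C):

```python
# Hypothetical sketch of the fixed DBCreatedVersion behavior.
VERSION_UNKNOWN = 0                     # placeholder value
VERSION_12, VERSION_16, VERSION_17 = 12, 16, 17

def created_version(store_format: int, library_version: int = VERSION_16) -> int:
    # A store format newer than this library is now reported as unknown
    # instead of being misreported as the library's own version.
    if store_format > library_version:
        return VERSION_UNKNOWN
    return store_format

assert created_version(VERSION_17) == VERSION_UNKNOWN   # v16 library, v17 db
assert created_version(VERSION_16) == VERSION_16
```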
================(Build #4038 - Engineering Case #810461)================
The deprecated SQL Anywhere dbisqlc utility crashed when scrolling the Options
list box which pops up when a "SET" command is executed. This problem
has been fixed.
================(Build #4020 - Engineering Case #810172)================
When dbunload version 17 is run, it creates an unprocessed.sql file in the
current working directory. If that directory is not writable, then dbunload
will fail.
This problem has been fixed.
A new option -ru is provided to specify the location and name of the unprocessed
SQL statements file. It is similar to the -r option which is used to specify
the location and name of the "reload.sql" file.
-ru <file> path of unprocessed SQL file (default "unprocessed.sql")
Also, if the unprocessed SQL statements file cannot be written to the current
working directory or the directory specified by -ru, then it is written to
the temporary files folder specified by either the dbunload -dt option or
by one of the SATMP, TMP, TMPDIR, and TEMP environment variables.
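The fallback order described above can be sketched as follows (hypothetical Python, not dbunload source; the helper name is invented):

```python
import os

def unprocessed_sql_path(ru_path=None, dt_dir=None):
    # Preferred location: the -ru path if given, otherwise
    # unprocessed.sql in the current working directory.
    candidates = [ru_path] if ru_path else [
        os.path.join(os.getcwd(), "unprocessed.sql")]
    # Fallback: the -dt directory, else the first of SATMP, TMP,
    # TMPDIR, TEMP that is set in the environment.
    temp_dir = dt_dir or next(
        (os.environ[v] for v in ("SATMP", "TMP", "TMPDIR", "TEMP")
         if v in os.environ), None)
    if temp_dir:
        candidates.append(os.path.join(temp_dir, "unprocessed.sql"))
    # Use the first candidate whose directory is writable.
    for path in candidates:
        if os.access(os.path.dirname(path) or ".", os.W_OK):
            return path
    return None
```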
================(Build #4009 - Engineering Case #809361)================
Previously, the OUTPUT statement could write truncated CHAR, VARCHAR, NCHAR,
NVARCHAR, and LONG NVARCHAR values if the TRUNCATION_LENGTH option was set
to a value which was shorter than the length of the data value. Values in
the output file could also have had trailing zero bytes (or '\x00' sequences).
This has been fixed.
================(Build #4004 - Engineering Case #808922)================
After upgrading an encrypted database from a previous version, the renamed
transaction log file that was created in the upgrade process could not be
decrypted. Database tools like dblog and dbtran showed the error message
"Log file corrupted (invalid page number found)" for those transaction
log files. This has been fixed.
================(Build #2828 - Engineering Case #805948)================
"Automatically refetch results" is an option that re-executes the
previous statement that returned a result set when you execute an INSERT, UPDATE,
or DELETE statement.
This option was broken starting in 17.0.0. Instead of showing the updated
result set, no results were shown after the INSERT, UPDATE, or DELETE statement.
This has been fixed.
Further: The option only applies when executing an INSERT, UPDATE, or DELETE
statement following the execution of a statement that returns a single result
set.
================(Build #2169 - Engineering Case #800153)================
In case-sensitive databases, the dbisql tool did not return table constraint
information for DESCRIBE tablename if the table name was specified in a different
case. This has been fixed.
================(Build #2165 - Engineering Case #801963)================
On Windows systems, a custom SQL Anywhere install using the MSI file created
by the Deployment Wizard would have failed to create some registry entries.
In particular, the following entries were not set for the indicated versions
and bitness.
Version 16.0 64-bit HKEY_LOCAL_MACHINE\SOFTWARE\SAP\SQL Anywhere\16.0
Version 17.0 64-bit HKEY_LOCAL_MACHINE\SOFTWARE\SAP\SQL Anywhere\17.0
When one of the above registry entries was missing, it may have led to
problems locating the correct version of software components. The procedure
for creating these entries manually is described here: http://dcx.sap.com/index.html#sqla170/en/html/815fecef6ce21014b8cbe79cfc3ef3a3.html
32-bit HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY <version>.0
32-bit HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY <version>.0 Admin
64-bit HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY64 <version>.0
64-bit HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY64 <version>.0 Admin
where <version> is 12, 16, or 17.
When the above registry entries are missing, it is not possible to see the
message text when using the Windows Event Viewer to examine event log entries
created by the database server and other SQL Anywhere components. The procedure
for creating these entries manually is documented here: http://dcx.sap.com/index.html#sqla170/en/html/816136106ce21014b9a68de8836cc659.html
These problems have now been fixed.
================(Build #2086 - Engineering Case #799782)================
If the Unload utility was run with the -ar option ("rebuild and replace
database") when attempting to rebuild an encrypted database from a previous
version of SQL Anywhere that had been involved in replication or synchronization,
the process could have failed with the error:
Unable to open database file "C:\full\path\cons.db" - - C:\full\path\cons.db
no database specified
even though the database existed at "C:\full\path\cons.db". This
has now been fixed.
================(Build #2029 - Engineering Case #797243)================
The Interactive SQL utility (dbisql) could have reported an internal error
on shutdown in some intermittent, timing-dependent cases. This has been fixed.
================(Build #2023 - Engineering Case #797605)================
Executing a statement or a batch of statements which resulted in a lot of
asynchronous messages being sent back to the client could have caused the
Interactive SQL utility to become unresponsive for many minutes when it was
run as a windowed application. This has been fixed.
================(Build #2000 - Engineering Case #797075)================
If a user owned a table or procedure with the same name as a system table
or procedure (e.g. a user with an "sa_split_list" procedure), that object
would not have been included if the database was unloaded with Unload Database
utility (dbunload). This has been fixed.
================(Build #2000 - Engineering Case #788758)================
In the "Connect" dialog, the computer named in the "Host"
field can be pinged when connecting to a database on a different computer.
This window did not handle the case where multiple hosts were specified.
This has been fixed so that now it does.
================(Build #2000 - Engineering Case #743754)================
The syntax highlighting editor paints the background of brackets a different
colour when the caret is adjacent to a bracket. The matching bracket is
also highlighted to more easily identify pairs of brackets.
The colour used for the highlighting should have been (but was not) customizable,
along with all of the other colours the editor uses. This has been corrected
so that the colour used for bracket highlighting can now be set.
================(Build #1458 - Engineering Case #796852)================
In the Interactive SQL utility, the SQL editor can show procedure, function,
and (spatial) method prototypes in a tooltip for the editor. When an opening
parenthesis is typed, the editor communicates with the database to see if
the text to the left of the parenthesis is a procedure so that it can compose
the prototype for the tooltip. The editor is unresponsive for the time needed
for that database check. For slow databases, the editor would have hung for
a couple of seconds when typing an opening parenthesis, which made it unusable.
The editor configuration dialog has a checkbox, "Show tool tips".
The SQL editor would have performed the database check even if this box was
cleared. Now, the database check is skipped if the box is cleared.
KBA 2306369 https://service.sap.com/sap/support/notes/2306369
================(Build #1435 - Engineering Case #796140)================
Clicking a favorite for a database connection could have left the Interactive
SQL utility unresponsive to input with a "Connecting to database"
message shown. This was most likely to happen when there was already a connection
to a database when the favorite was clicked. This has been fixed.
================(Build #1432 - Engineering Case #795982)================
On OS X systems, the "Expression Editor" window, which is part
of the "Query Editor" was displayed correctly only the first time
it was opened. When opened subsequent times, only its title bar appeared.
It was impossible to resize the window back to its normal size. This has
been fixed.
================(Build #1432 - Engineering Case #795980)================
Error messages that were too long to fit on a single line in the History
panel were drawn so they overlapped the statement timing text, making the
error message difficult or impossible to read. This has been corrected so
that the error messages are now line wrapped.
================(Build #1397 - Engineering Case #794961)================
The text completer could have mistaken the keyword "ON" following
a table name as a table alias. If the completer was used to fill in the name
of a column in that table, the column name would have been prefixed by "on.",
which was incorrect. This has been fixed.
================(Build #1397 - Engineering Case #794889)================
The Connect window allows connecting to a SQL Anywhere database using a connection
string, and contains a list of recently used connection strings. Passwords
in the list of connection strings were not removed when they were saved.
This has been corrected so that now they are.
================(Build #1396 - Engineering Case #794832)================
When using the Unload utility to do an online rebuild (dbunload -ao), the
utility performs a check that the number of rows in a table or materialized view
in the old database matches the number in the new database. This check was not
valid for MANUAL refresh materialized views, which are not refreshed by default
during database reload. Furthermore, if the -g option was specified, and the view
was refreshed during reload, it may still not have been in exactly the same
state as the original view. This has been fixed.
Note that manual materialized views still have to be manually refreshed
after a rebuild unless the -g option is specified.
================(Build #1383 - Engineering Case #794821)================
If the Interactive SQL utility (dbisql) was run as a command line program
from an SSH shell (or similar), and the SSH connection was closed, it was
possible for dbisql to then consume 100% of the CPU. This has been fixed.
================(Build #1339 - Engineering Case #792652)================
The "Compare Plans" window could have crashed the Interactive SQL
utility while comparing plans if one plan contained a subquery that was not
in the other plan. This has been fixed.
================(Build #1319 - Engineering Case #791975)================
A review of Java diagnostics revealed an incorrect coding practice that could
have caused the Interactive SQL utility to become unresponsive under rare
circumstances. This has been fixed.
================(Build #1309 - Engineering Case #791613)================
Statements were not added to the "History" tab until they had completed
executing. This had the inadvertent side-effect of delaying all asynchronous
messages associated with the statement from being displayed until after the
statement had completed. This has been fixed so that statements are added
to the "History" panel as soon as they start executing. Asynchronous
messages are displayed as they are received by the Interactive SQL utility.
As part of this change, a second bug was fixed. If an empty asynchronous
message was received, a blank line was not (but should have been) displayed
on the "History" panel. Omitting the blank line prevented subsequent
asynchronous messages from the same statement from being displayed consistently.
This has also been fixed.
================(Build #1303 - Engineering Case #791422)================
The Service utility (dbsvc) for Linux would have failed to start a service
on SuSE 11 systems. The following error message was displayed:
sbin/start-stop-daemon: invalid option -- 'c'
Try `/sbin/start-stop-daemon --help' for more information
This has now been fixed.
================(Build #1298 - Engineering Case #791163)================
The usage screen for the Service utility (dbsvc) for Linux was missing [options]
for all use-cases other than "delete". They have now been added. Also,
the capitalization of options for the -t flag has been corrected. It was also
possible for the wrong PID to be written to the PID file, resulting in failure
to start some services. This has been fixed.
================(Build #1257 - Engineering Case #789269)================
The Text Completer did not include support for the following statements:
ALTER ODATA PRODUCER
CREATE [OR REPLACE] ODATA PRODUCER
DROP ODATA PRODUCER [IF EXISTS]
These have now been added.
================(Build #1257 - Engineering Case #789262)================
If multiple rows were selected in a result table, pressing the Delete key
or clicking the "Delete Row" context menu deleted only the first
selected row, rather than all of the selected rows. This has been fixed
so that all of the selected rows are now deleted.
================(Build #1249 - Engineering Case #788698)================
Changes for Engineering case 768658 prevented the Interactive SQL utility
from committing on shutdown (or when disconnecting) when connected to any
type of database other than SQL Anywhere or SAP IQ. This has been fixed.
================(Build #1238 - Engineering Case #788303)================
The Interactive SQL utility could have crashed after executing a statement,
or after disconnecting from a database. The problem was timing-dependent,
and appeared only on certain computers. It has now been fixed.
================(Build #1235 - Engineering Case #788105)================
When a statement is executed, the SQL is added to the History panel where
its execution time, result set count and update counts are also displayed.
If a statement or batch returned more than about 10 result sets, and the
SQL was longer than the History panel was wide, the SQL could have been displayed
as wide as the panel, effectively pushing the execution time and counts so
far to the right that they could not be seen. This has been fixed.
================(Build #1235 - Engineering Case #788097)================
On Ubuntu systems, creating a service with dbsvc may have resulted in the
following error messages:
update-rc.d: warning: start runlevel arguments (2 3 5 ) do not match SA_demosvc
Default-Start values (2 3 5)
update-rc.d: error: expected runlevel [0-9S] (did you forget "."
?)
A temporary workaround is to run the update-rc.d command manually after
receiving the above error message:
sudo /usr/sbin/update-rc.d SA_demosvc start 60 2 3 5 . stop 80 S 0 1 4 6
.
This has been fixed.
================(Build #1231 - Engineering Case #788025)================
In the Interactive SQL utility, DATE, TIME, and TIMESTAMP table values can
be formatted using the locale rules (the default), or using the database
server's formatting options. When editing a cell value in the "Results"
panel when server formatting was selected, the DATE/TIME/TIMESTAMP value
would have been incorrectly shown using the locale-specific formatting. For
example, on a German computer, if a TIMESTAMP value was edited which the
database server would have rendered as "2015-08-05 15:54:29.768",
the cell editor value would have been "05.08.2015 15:54". Note
the use of periods rather than dashes to separate the parts of the date,
as well as the order of the date parts. Further, if anything other than an
ISO-formatted value was given in the cell editor, it could have been parsed
incorrectly, which would end up inserting an incorrect value into the table.
Now, the cell editor's value uses the same formatting as the server.
This change affects DBISQL if server formatting of dates, times, and timestamps
is selected. It also affects SQL Central which always formats dates, times,
and timestamps using the database's formatting options.
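The two renderings of the example timestamp can be reproduced with explicit format strings (an illustration of the mismatch; the exact patterns used by the product are assumptions):

```python
from datetime import datetime

ts = datetime(2015, 8, 5, 15, 54, 29)
# German locale-style rendering the cell editor incorrectly used:
locale_style = ts.strftime("%d.%m.%Y %H:%M")       # 05.08.2015 15:54
# Server-style rendering now used consistently:
server_style = ts.strftime("%Y-%m-%d %H:%M:%S")    # 2015-08-05 15:54:29
assert locale_style == "05.08.2015 15:54"
assert server_style == "2015-08-05 15:54:29"
```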
================(Build #1229 - Engineering Case #787888)================
Adding a row to a table from the "Results" panel with a UNIQUEIDENTIFIER
column would have caused Interactive SQL to crash. This has been fixed.
This problem only affected new rows added from the scrolling table component
in the "Results" panel. Editing an existing row was fine. Executing
an explicit INSERT statement was also fine.
This bug also affected the "Data" tab for tables in SQL Central.
================(Build #1221 - Engineering Case #787530)================
An XML value can be viewed from a result set in its own window. That window
contains a tab called "XML" which contains a "Format"
button. Clicking the button formats the XML to make it more readable. If
the column value included a self-closing element which contained whitespace
within the tag, it was not recognized as a self-closing element, and all
subsequent indenting was wrong.
For example, "<e><e /></e><f>Test</f>"
should be formatted
<e>
<e />
</e>
<f>Test</f>
but was incorrectly formatted
<e>
<e /></e>
<f>Test</f>
This has been fixed.
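For comparison, a generic XML formatter handles the same fragment as expected (Python's minidom as a stand-in; DBISQL's formatter is its own code, and the fragment is wrapped in a root element so it parses):

```python
from xml.dom import minidom

# The self-closing "<e />" contains whitespace inside the tag; a
# correct formatter still treats it as self-closing and keeps the
# indentation of the following elements right.
doc = minidom.parseString("<r><e><e /></e><f>Test</f></r>")
print(doc.documentElement.toprettyxml(indent="  "))
```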
================(Build #1209 - Engineering Case #786957)================
The "Tools/Compare Plans" menu could have been enabled when connected
to a database that was not SQL Anywhere or IQ. Now, the menu is enabled
only when connected to SQL Anywhere or IQ databases.
================(Build #1202 - Engineering Case #786627)================
If an attempt to connect to a SQL Anywhere database failed, an error message
with a "Help" button was shown. Clicking the button opened a web
browser, but the browser would have reported a 404 error (page not found).
This has now been corrected so that the browser opens a page which describes
the error.
================(Build #1193 - Engineering Case #786259)================
XML values can be displayed in their own window by double-clicking them.
That window contains a number of tabs, one of which is "XML Outline",
which renders the XML value as a tree. On non-Windows computers, clicking
on an expandable node in the tree could have expanded the wrong node, or
could have done nothing. This has been fixed.
================(Build #1193 - Engineering Case #786243)================
It was possible for the Interactive SQL utility to have reported an out-of-memory
error in the Import wizard when importing data which contained very long
column values. This has been fixed.
================(Build #1187 - Engineering Case #785827)================
If a service was created that required an ODBCINI setting using the Service
utility (dbsvc) on some Linux distros, the service would have failed to start
or would have behaved incorrectly. This was due to the ODBCINI environment
variable setting not propagating through to the started service. Affected
distros include Red Hat 5, SuSE 12 (when using the LSB service interface),
and possibly others. This has been fixed.
================(Build #1173 - Engineering Case #785128)================
Explicitly specifying the -up option of the Unload utility (dbunload) would
have also turned the -v option on. This has been fixed.
================(Build #1167 - Engineering Case #784929)================
If the Broadcast Repeater utility's -x option was used to stop an
existing dbns, it would partly work (the first dbns would shut down), but the second
dbns would remain running. This has been fixed.
================(Build #1166 - Engineering Case #784802)================
Rapidly pressing the F5 or F9 keys to repeatedly execute a statement could
have caused the Interactive SQL utility to report an internal exception.
This has been fixed.
================(Build #1165 - Engineering Case #784736)================
If an INPUT or OUTPUT statement completed on a different tab, the "SQL/Execute",
"SQL/Stop" (et al) menus and their associated toolbar buttons were
not enabled correctly; "Execute" was disabled, and "Stop" was enabled. As
a result, even though the statement had completed, it was not possible to
execute any more statements on the tab. This has been fixed.
Note that the problem was specific to the INPUT and OUTPUT statements, and
they had to be running on a tab that was not selected when the statement
completed. If the statement was running in the selected tab, the menus and
toolbar buttons were enabled correctly.
================(Build #1165 - Engineering Case #784723)================
The Interactive SQL utility could have reported an internal error if the
Query Editor was opened after losing the connection to a SQL Anywhere database.
This has been fixed.
================(Build #1164 - Engineering Case #784665)================
The Interactive SQL utility could have reported an internal error when starting
to edit a row of table data and then inserting a new row by opening the context
menu for the row header, rather than the row itself. This has been fixed.
================(Build #1164 - Engineering Case #784649)================
When exporting data to an ASE database, the "Owner" combobox on
the Export Wizard page where a table name is specified could have contained
a given owner name many times. This has been corrected so that now the name
appears only once.
================(Build #1163 - Engineering Case #784567)================
The SQL Anywhere Profiler could have crashed when running the Index Consultant
on an entire workload if the workload contained statements that were executed
by users that have since been dropped. This has been fixed.
================(Build #1064 - Engineering Case #787289)================
Execution of the Unload utility (dbunload) with the online rebuild options
-ao or -aob could have caused it to crash or incorrectly report a syntax
error if run against a version 16 or earlier database. This has been fixed
so a descriptive error is now reported.
================(Build #6042 - Engineering Case #820964)================
The synchronization and replication components could crash on UNIX, if they
cannot find the resource files. This has been fixed.
================(Build #6297 - Engineering Case #824506)================
If dbunload had been used to rebuild a replicating database, and table "X"
had a different table_id value in the rebuilt database, then SQL Remote could
have scanned operations from table "X" with different table_id values. If
SQL Remote then encountered a SYNCHRONIZE SUBSCRIPTION command, issued after
the rebuild, that involved a subscription using table "X", the SYNCHRONIZE
SUBSCRIPTION command would have sent deletes and inserts for table "X" twice,
resulting in a primary key violation when applied at the remote database.
This has now been fixed.
================(Build #3399 - Engineering Case #806992)================
If the log scanning code shared by dbremote, dbmlsync and dbtran was scanning
an offline directory that contained multiple pre-v17 transaction logs AND
at least one v17 transaction log, then all but the earliest pre-v17 transaction
log would be flagged as not needed. This would result in dbremote, dbmlsync
or dbtran reporting that a range of log offsets was missing between the
end of the earliest pre-v17 transaction log and the start of the first v17
transaction log in the offline directory. This has been fixed.
================(Build #2172 - Engineering Case #802216)================
If SQL Remote scanned a STOP SUBSCRIPTION and START SUBSCRIPTION command
for the same subscription, it was possible for SQL Remote to have crashed
if an operation that belonged to this subscription was scanned between the
two commands. This has now been fixed.
================(Build #6097 - Engineering Case #821284)================
Previously, DBISQL and SQL Central could display text with unexpectedly large
(or small)
fonts on high DPI monitors running Windows. This has been fixed.
If you are running Windows 10, and had previously changed the "High
DPI
scaling override" property of scvjiew.exe or dbisql.exe, your fonts
may
be unexpectedly smaller after this change. In that case, disable the
"High DPI scaling override", then restart DBISQL / SQL Central.
================(Build #1351 - Engineering Case #793253)================
On Japanese and Chinese systems, mnemonics for items in the "Connect"
menu were incorrectly in the middle of the menu text, rather than at the
end. This has been fixed.
================(Build #6276 - Engineering Case #824290)================
Attempting to open an UltraLite database on a storage system that did not
support file locks would have resulted in an incorrect error: -816 (Specified
database file already in use). If file locks are not supported, the runtime
now signals a warning and continues. The application environment must now
ensure the file is only opened once.
================(Build #5994 - Engineering Case #820702)================
UltraLite on Android now uses OpenSSL 1.1.1d.
================(Build #5965 - Engineering Case #820307)================
After encountering a SQLE_TOO_MANY_BLOB_REFS error during an insert or update,
the database could become corrupt and certain operations could crash;
validation would report a corrupt index. This has been fixed.
================(Build #5964 - Engineering Case #820034)================
The "string to be searched for" argument of the LOCATE function is limited
to 255 bytes. If that argument was larger than 255 bytes, the UltraLite runtime
would have crashed rather than returning NULL. This has been fixed.
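The documented behavior can be sketched with a small Python model (an
illustration only, not the UltraLite implementation; the function name and
signature are hypothetical): LOCATE returns the 1-based position of the search
string, 0 when it is not found, and NULL (None here) when the search-string
argument exceeds the 255-byte limit.

```python
def locate(haystack, needle, start=1):
    """Model of SQL LOCATE: 1-based position of needle in haystack,
    0 if not found, NULL (None) if needle exceeds the 255-byte limit."""
    if haystack is None or needle is None:
        return None
    if len(needle.encode("utf-8")) > 255:
        return None  # fixed behavior: return NULL instead of crashing
    pos = haystack.find(needle, start - 1)
    return pos + 1  # find() returns -1 when absent, so this yields 0
```

Before the fix, the oversized-argument branch was the case that crashed; the
model simply returns None for it.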
================(Build #5959 - Engineering Case #820306)================
Synchronizations using download truncate table did not properly diagnose
conflicts with pending transactions and could corrupt those pending
transactions. This has been fixed.
================(Build #5916 - Engineering Case #820048)================
Using file transfer on Android with a secure network stream resulted in an
error indicating the mlcrsa library could not be loaded (SQL code -1305,
stream error 224 with parameter libmlcrsa17.so). This has been fixed.
================(Build #5891 - Engineering Case #819887)================
Synchronizing through a web or proxy server would fail if the server requested
HTTP authentication even though the http_user and http_password or http_proxy_userid
and http_proxy_password options were provided. This has been fixed.
================(Build #4947 - Engineering Case #818440)================
It was possible, though unlikely, for a synchronization through HTTP or HTTPS
to fail with stream error STREAM_ERROR_HTTP_HEADER_PARSE_ERROR if the network
connection was unstable. This has been fixed.
================(Build #4840 - Engineering Case #815884)================
UltraLite clients could fail to sync with "Invalid sync sequence ID
for remote..." errors with certain HTTP intermediaries and multiple
MobiLink servers. This has been fixed.
================(Build #4818 - Engineering Case #815280)================
When inserting into an auto-incrementing table concurrently (on multiple
connections), the last-identity value (which is connection-specific) could
be incorrect. This has been fixed.
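The connection-specific last-identity semantics can be sketched as follows
(a simplified Python model, not the UltraLite implementation; the class and
method names are hypothetical): even when inserts from multiple connections
interleave, each connection must see the identity value of its own last insert.

```python
import threading

class AutoIncTable:
    """Simplified model of an auto-increment column with a
    per-connection last-identity value."""

    def __init__(self):
        self._lock = threading.Lock()
        self._next = 0
        self._last_by_conn = {}  # connection id -> last identity value

    def insert(self, conn_id):
        with self._lock:
            self._next += 1
            # record the generated value per connection, not globally,
            # so concurrent inserts cannot overwrite each other's value
            self._last_by_conn[conn_id] = self._next
            return self._next

    def last_identity(self, conn_id):
        return self._last_by_conn.get(conn_id)
```

The bug described above corresponds to tracking a single global "last value"
instead of the per-connection map.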
================(Build #4802 - Engineering Case #814819)================
When resuming a synchronization, an early interrupt via the observer callback,
or other error, would corrupt the resume metadata and orphan uncommitted
rows in the database. This has been fixed. A side effect of this change is
that resumed-sync statistics appear in observer callbacks right from the
start.
================(Build #3454 - Engineering Case #808430)================
On iOS (and macOS), for timeout-related failures to connect to the synchronization
server, UltraLite reported a read/write timeout or network error rather than
a connection error. When using HTTP, the connection attempt was also prolonged
beyond the specified timeout value. This has been fixed. UltraLite now correctly
diagnoses connection timeout errors.
================(Build #3418 - Engineering Case #807489)================
UltraLite could signal SQLE_DEVICE_ERROR (-305) with error number 200019
when performing a change-encryption-key operation. This has been fixed.
================(Build #3400 - Engineering Case #807004)================
MobiLink sync clients now use improved error handling with some HTTP intermediaries.
================(Build #2751 - Engineering Case #803445)================
The table name parameter was incorrect for the error SQLE_ROW_DELETED_TO_MAINTAIN_REFERENTIAL_INTEGRITY.
It is now the table from which the row was deleted, as documented. The error
was also previously not signaled in all cases. This has been fixed.
================(Build #2749 - Engineering Case #803411)================
UltraLite allows requests to run concurrently during synchronization, though
there are periods where these requests block (may pause before executing).
Blocking is now eliminated for the majority of download commit activities.
Several new synchronization states are now available:
CHECKING_RI - Performing referential integrity checks related to the download,
and cascading deletes.
REMOVING_ROWS - Removing old rows deleted by the download (part of committing
the download).
CHECKPOINTING - Checkpointing the database. This can happen at various times.
RI checking for synchronization is now more efficient. Performance improvements
depend on your schema and some internal details, ranging from negligible to
taking only a fraction of the previous time.
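The new states listed above can be modeled with a simple enumeration (the
state names are taken from this entry; the enumeration and helper function
are illustrative, not part of the UltraLite API):

```python
from enum import Enum, auto

class SyncState(Enum):
    """Model of the new synchronization states described above."""
    CHECKING_RI = auto()    # RI checks for the download, cascading deletes
    REMOVING_ROWS = auto()  # removing old rows deleted by the download
    CHECKPOINTING = auto()  # checkpointing the database (various times)

def describe(state):
    """Hypothetical helper mapping a state to its description."""
    return {
        SyncState.CHECKING_RI:
            "Performing referential integrity checks related to the download",
        SyncState.REMOVING_ROWS:
            "Removing old rows deleted by the download",
        SyncState.CHECKPOINTING:
            "Checkpointing the database",
    }[state]
```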
================(Build #2749 - Engineering Case #803410)================
If a synchronization download failed with a conflict, or was interrupted,
and the download involved cascading deletes, the download was rolled back
incorrectly (rows from the download persist, for example). This has been
fixed. In addition, database validation now includes RI consistency checking.
================(Build #2003 - Engineering Case #796749)================
UltraLite error parameters were truncated when they exceeded roughly 70 bytes
(the size of the parameters field in the SQLCA data structure).
This has been addressed by internal changes to store full error parameter
info, new C++ APIs to access it, and updates to other tools and languages.
Existing APIs based on the SQLCA or ul_error_info structure, which includes
the ULError class, continue to truncate error parameters because of the nature
of those structures.
Here is an example of parameter truncation using the ulsync tool:
Error: Sync failed: Synchronization failed due to an error on the MobiLink
server: [-10002] Message: ODBC: [SAP][ODBC Driver][SQL Anywhere]Primary key
f
ulsync now reports:
Error: Sync failed: Synchronization failed due to an error on the MobiLink
server: [-10002] Message: ODBC: [SAP][ODBC Driver][SQL Anywhere]Primary key
for table 'basic' is not unique: Primary key value ('100') (ODBC State =
23000, Native error code = -193). Table Name: basic. Primary Key(s): 100
[SQLCODE -857/SERVER_SYNCHRONIZATION_ERROR]
For C++, use the new ULConnection::GetLastErrorMessage(), which is equivalent
to ULConnection::GetLastError() and ULError::GetString(), except without
truncation. A new error callback provides full error info as well (see ULDatabaseManager::SetErrorCallbackEx()).
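The difference between the two code paths can be illustrated with a small
Python model (the roughly-70-byte SQLCA limit is from this entry; the
function names are illustrative stand-ins, not the UltraLite API):

```python
SQLCA_PARM_LIMIT = 70  # approximate size of the SQLCA parameters field

def sqlca_message(full_message):
    """Old behavior: parameters squeezed through the SQLCA are truncated."""
    return full_message[:SQLCA_PARM_LIMIT]

def last_error_message(full_message):
    """New behavior (in the spirit of ULConnection::GetLastErrorMessage):
    the full, untruncated error text is returned."""
    return full_message
```

APIs that still route text through the fixed-size SQLCA field necessarily
behave like sqlca_message(); the new C++ APIs behave like last_error_message().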
================(Build #1381 - Engineering Case #794181)================
The runtime would have crashed if a temporary table exceeded the maximum
row size. This has been fixed. The runtime will now correctly report the
error SQLE_MAX_ROW_SIZE_EXCEEDED.
================(Build #1353 - Engineering Case #793459)================
When executing a query containing a comparison operator in the WHERE clause,
the UltraLite runtime could have returned incorrect rows, or failed to return
the expected rows. This would have occurred when the rows had NULL values
for the index used to perform the query. This has been fixed.
================(Build #1286 - Engineering Case #790646)================
The UltraLite WinRT component failed to synchronize over HTTPS using non-persistent
HTTP 1.0 connections. This has been fixed.
================(Build #1253 - Engineering Case #788991)================
If an UltraLite client crashed or was terminated in the middle of a download-only
synchronization, it was possible for the client to enter a state where all
subsequent synchronizations would fail with SQLE_UPLOAD_FAILED_AT_SERVER
and the MobiLink log would report mismatched sequence IDs. This has been
fixed.
================(Build #1208 - Engineering Case #786956)================
The UltraLite WinRT component was failing the Windows App Certification Test.
This has now been fixed.
The main impact of this fix is that the Close() methods of the following
classes were renamed to CloseObject(): IndexSchema, TableSchema, DatabaseSchema,
Table, ResultSet, PreparedStatement, and Connection. This is because these
classes implicitly implement the Windows.Foundation.IClosable interface,
which has a Close() method. The CloseObject() method performs actions specific
to the UltraLite component.
================(Build #1174 - Engineering Case #785272)================
The UltraLite Runtime library could have caused a crash when processing nested
queries, typically with at least 32 levels of nesting. This has been fixed.
Now, if UltraLite cannot process such queries due to resource constraints,
a SQLE_RESOURCE_GOVERNOR_EXCEEDED error is signaled.
================(Build #1165 - Engineering Case #784517)================
The Close method of the Connection class of the UltraLite WinRT component
was not visible in the projection to JavaScript, even though it was visible
in the projections to C++ and C#. This has been fixed by the addition of
the method CloseJS to Connection, which is equivalent to Close, and is visible
in the JavaScript projection.
Similarly, the Close methods in the following classes were not visible in
the projection to JavaScript:
DatabaseSchema
IndexSchema
PreparedStatement
ResultSet
Table
TableSchema
This has been fixed by adding CloseJS methods to these classes.
================(Build #1064 - Engineering Case #788994)================
On iOS (or Mac OS X), UltraLite synchronizations could have reported a protocol
error on a network failure, rather than succeeding or reporting the correct
stream error. This has been fixed.
================(Build #1384 - Engineering Case #788865)================
An Embedded SQL application using TCHAR datatypes may have encountered a
compile error. This has been fixed.
================(Build #5853 - Engineering Case #819619)================
SQL Central would have crashed or reported a runtime error if it had attempted
to connect to an UltraLite database with a path of 128 or more characters.
This has been fixed.
================(Build #1162 - Engineering Case #784515)================
If values were edited in an UltraLite table whose primary key contained DATE
columns, SQL Central and the Interactive SQL utility would not have been able
to refetch the edited row values; an error message was shown to the user. If
the user then tried to delete that row, the software would have reported an
internal error. This has been fixed so the error no longer occurs.
This change also fixes the behavior where selecting a row, then clicking
one of the items in the "Generate" context menu, would have generated
a SQL statement containing a literal DATE value that could not be processed
by the database.
================(Build #4579 - Engineering Case #811101)================
UltraLite now reads goodbye responses from MobiLink when using HTTP.
Previously this was not done because of the overhead of an additional GET,
and it was not deemed necessary; however, skipping the goodbye could result
in MobiLink waiting before closing the socket, which could lead to
unresponsiveness of the MobiLink server.
================(Build #1419 - Engineering Case #795326)================
If an operation was performed that would result in a SQLE_INDEX_NOT_UNIQUE
error, UltraLite would incorrectly report the error as SQLE_PRIMARY_KEY_NOT_UNIQUE.
This has been fixed.
================(Build #2091 - Engineering Case #799942)================
Customers developing Windows 10 applications were previously recommended
to reference the UltraLite Windows 8.1 libraries. However, it was found that
applications referencing these 8.1 libraries may fail the Windows App Certification
Kit (WACK) tests, preventing them from being published to the Microsoft app
store. UltraLite libraries built specifically for Windows 10 that will pass
the WACK are now provided.
A security patch contains an already-released version of the software but includes
updated security components. This process allows the software to be tested more quickly
so that important security fixes reach customers sooner. Two build numbers
are recorded for security patches: one build number identifies the build of the software
that was previously tested and released; the other is the build of the new security
components that have been updated in the release.
The following security patches have been released.