SQL Anywhere Bug Fix Readme for Version 16.0.0, build 2798







Description of download types

Bug Fixes

A subset of the software with one or more bug fixes. The bug fixes are listed below. A Bug Fix update may only be applied to installed software with the same version number. While some testing has been performed on the software, you should not distribute these files with your application unless you have thoroughly tested your application with the software.

Minor Release

A complete set of software that upgrades installed security/encryption components while only updating the SQL Anywhere components to the level of the previously released build for a given platform. These are generated so that security/encryption changes can be provided quickly.



16.0.0 Behavior Changes and Critical Bug Fixes

If any of these bug fixes apply to your installation, iAnywhere strongly recommends
that you install them. Specific testing of behavior changes is recommended.


MobiLink - Relay Server

================(Build #1430 - Engineering Case #730770)================ The Relay Server for IIS may have leaked memory. This has been fixed.

SQL Anywhere - Other

================(Build #2621 - Engineering Case #812492)================ The version of OpenSSL used by all SQL Anywhere and IQ products has been upgraded to 1.0.2n.

================(Build #2604 - Engineering Case #812032)================ The version of OpenSSL used by all SQL Anywhere and IQ products has been upgraded to 1.0.2m.

================(Build #2283 - Engineering Case #798416)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1t.

================(Build #2257 - Engineering Case #796406)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1s.

================(Build #2242 - Engineering Case #795323)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1r.

================(Build #2219 - Engineering Case #793255)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q.

================(Build #2157 - Engineering Case #786881)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1p.

================(Build #2041 - Engineering Case #773812)================ The version of OpenSSL used by all SQL Anywhere products is now 1.0.1j.

================(Build #1915 - Engineering Case #764130)================ Some additional fixes were required for Engineering Case 761751.

================(Build #1824 - Engineering Case #761751)================ The OpenSSL vulnerability known as Heartbleed impacted the following components of SQL Anywhere software:
- SQL Anywhere Server when using TLS (Transport Layer Security) communications and/or HTTPS web services, though only to the networks that can access the server. Calling external web services over HTTPS from the database server was also affected.
- MobiLink Server when using TLS and/or HTTPS communications, though only to the networks that can access the MobiLink server.
- Relay Server Outbound Enabler

Affected versions (all platforms were impacted by the vulnerability):
- SQL Anywhere 12.0.1 builds 3994-4098
- SQL Anywhere 16.0 builds 1690-1880

This vulnerability has been resolved by replacing the OpenSSL libraries with corrected versions. Once this Support Package has been applied, regenerate any certificates that were being used, and then change any passwords/keys associated with SQL Anywhere web service calls or TLS authentication.

SQL Anywhere - Server

================(Build #1529 - Engineering Case #738679)================ If a database created with a version 12 server was run on a version 16 server (without being upgraded), inserting data into a compressed column would have succeeded and the data could be correctly retrieved, but if the same database was later moved back to a version 12 server, the data may not have been retrieved correctly. This problem would also have been seen if views, procedures, events, triggers, comments on objects, text configuration objects, or Java JAR files were added or altered in the version 12 database while running on the version 16 server (since those system tables contain compressed columns). This has been fixed.



16.0.0 New Features

This section contains a description of new features added since the release
of version 16.0.0.


MobiLink - Java Plugin for Sybase Central

================(Build #1745 - Engineering Case #751936)================ The MobiLink plug-in now supports:
- The new MobiLink server -zup switch. This switch can be accessed from the Advanced tab of the property sheet for a MobiLink Server Command Line. An error is reported if -zup is set and -zu is set to false, as this combination is not allowed.
- For user authentication policies, calling the standard MobiLink authentication scripts never, always, or only when an LDAP server could not be found. This change affects the LDAP Servers page of the New User Authentication Policy wizard and the property sheet for user authentication policies. The setting is also shown in the right-hand pane when user authentication policies are displayed.

================(Build #1453 - Engineering Case #733180)================ In the MobiLink plug-in, the popup menu for a synchronization model now contains a new item, "Duplicate". This item creates a copy of the synchronization model in the same project. The name the user provides is used for the name of the copy, as well as for the script version and publication name values of the new synchronization model. This feature is useful when there is a working synchronization system and a copy of it is required as a starting place for the next version of the system.

================(Build #1451 - Engineering Case #733174)================ When the test window in the MobiLink plug-in is opened, it first deploys the synchronization model to the consolidated database and to a newly created remote database. In the past, changes were made directly to the databases to prepare for synchronization. Now, SQL files are generated containing the changes to be made, and those files are then automatically applied to the databases. This is consistent with the way deployment is handled when the deployment wizard is used. There should be no user-visible change in behavior, but this ensures that, going forward, the behavior seen when testing a synchronization model in the test window is consistent with that seen when the model is actually deployed using the deployment wizard.

MobiLink - Relay Server

================(Build #2041 - Engineering Case #773535)================ Backend farms and backend servers can now be automatically configured with default properties when auto_config=yes is specified in the [options] section of the Relay Server configuration file. When auto_config is turned on, the Relay Server becomes a Trust On First Use (TOFU) system in which Outbound Enablers can connect with previously unseen backend farm IDs and backend server IDs. A group of Outbound Enablers belonging to the same backend farm may connect with a farm-wide token. When the Relay Server processes the first Outbound Enabler connection with an unseen farm name, a new backend farm configuration is created. The Relay Server updates the original configuration file and persists the supplied token in a new backend farm property named token. Other backend farm properties are initialized to their default values. The auto farm configuration persists across Relay Server restarts. Similarly, a backend server configuration is created and persisted per Outbound Enabler with a unique server ID within the backend farm. The token supplied by all Outbound Enablers belonging to the same auto farm must match the farm-wide token; otherwise, access is denied. Other backend farm and server configurations can coexist with auto_config=yes in the configuration file. Backend farms with farm-wide tokens can also be specified in the new token property of the [backend_farm] section of the Relay Server configuration file, so as to reserve the farm name before the Relay Server starts up. This feature is suitable for demos, integration testing, training, and other administration-free environments. Configurations created by this auto-config feature can be further updated online using the local rshost or the remote AdminChannel. The auto_config property itself can also be changed online using those tools.

================(Build #1919 - Engineering Case #763864)================ Most Relay Server customers are not SQL Anywhere customers, but the Relay Server quick setup script was setting up services with a default "SQL Anywhere" prefix in the service name. The default prefix in the IIS6 and IIS7/8 quick setup scripts is now "Relay Server" instead.

================(Build #1453 - Engineering Case #732958)================ The existing affinity flag in the Relay Server record has been extended to carry a value of 'x' when the Relay Server has told the client to expire the affinity cookie. This can be useful for troubleshooting.
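As a rough sketch of what an auto-config setup might look like (the section and property layout follows the usual Relay Server configuration file format; the farm name and token value are made up for illustration):

```
# Enable Trust On First Use auto-configuration of backend farms/servers.
[options]
auto_config = yes

# Optionally reserve a farm name with a farm-wide token before startup;
# Outbound Enablers connecting for this farm must present the same token.
[backend_farm]
id = demo_farm
token = example-farm-token
```

With only the [options] entry present, the first Outbound Enabler connection for an unseen farm name would create and persist the farm configuration automatically.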

MobiLink - Synchronization Server

================(Build #2057 - Engineering Case #776338)================ If a MobiLink server receives an HTTP request with a URI of "/status" from a user agent that is not a MobiLink client, it now responds with a 200 instead of a 404.

================(Build #2048 - Engineering Case #775239)================ The MobiLink server now supports spatial data synchronization against HANA SPS09 databases. Plain INSERT and UPDATE SQL statements can be used directly for the MobiLink server upload_insert and upload_update events if the geometry columns in the synchronized table are not nullable. Otherwise, stored procedures may need to be used for the upload_insert and upload_update events, because HANA does not support null SRIDs even when the geometry data itself is null. Here are sample stored procedures for a synchronized table "test" defined as:

    CREATE COLUMN TABLE test (
        pk INT NOT NULL PRIMARY KEY,
        c1 ST_GEOMETRY(4326)
    )

For the upload_insert event:

    CREATE PROCEDURE upload_insert_proc( IN p_pk INT, IN p_geo BLOB, IN p_srid INT )
    LANGUAGE SQLSCRIPT AS
    BEGIN
        IF :p_geo IS NULL THEN
            INSERT INTO test( pk, c1 ) VALUES( :p_pk, NULL );
        ELSE
            INSERT INTO test( pk, c1 ) VALUES( :p_pk, ST_GeomFromWKB( :p_geo, :p_srid ) );
        END IF;
    END;

For the upload_update event:

    CREATE PROCEDURE upload_update_proc( IN p_pk INT, IN p_geo BLOB, IN p_srid INT )
    LANGUAGE SQLSCRIPT AS
    BEGIN
        IF :p_geo IS NULL THEN
            UPDATE test SET c1 = NULL WHERE pk = :p_pk;
        ELSE
            UPDATE test SET c1 = ST_GeomFromWKB( :p_geo, :p_srid ) WHERE pk = :p_pk;
        END IF;
    END;

================(Build #1733 - Engineering Case #751199)================ The MobiLink server now supports consolidated databases running on an Oracle 12.1 server. In order to use any of the new Oracle 12.1 features (32K-byte VARCHAR2, NVARCHAR2, and RAW data types, and implicit result sets), the build number of the SQL Anywhere Oracle ODBC driver must be greater than or equal to 1733 and the Oracle OCI library must be installed from the Oracle 12.1 installation image.

================(Build #1723 - Engineering Case #750296)================ Three new features/modifications have been introduced in the MobiLink server:

a) The ldap_failover_to_std property for a user authentication policy now accepts 0, 1, or 2 (it originally accepted only 0 (FALSE) or 1 (TRUE)). The MobiLink server authenticates users as follows:
   - 0: The MobiLink server authenticates the user against the LDAP server only. If the user cannot be authenticated against an LDAP server, the MobiLink server fails the synchronization request, regardless of the type of error.
   - 1: The MobiLink server authenticates the user with the standard script-based user authentication if, and only if, the LDAP server(s) are not available. The authentication status 6000 is passed to the user authentication scripts if the LDAP servers are not available.
   - 2: The MobiLink server authenticates the user against an LDAP server first, and then authenticates the user with the standard script-based user authentication, whether or not the user was authenticated by the LDAP server. The MobiLink server passes one of the following values as the user authentication status to the scripts: 1000 if the user was authenticated against the LDAP server; 4000 if the user was not authenticated against the LDAP server; or 6000 if the LDAP servers are not available.

b) User authentication using a default authentication policy: The MobiLink server now supports user authentication against an LDAP server using a default user authentication policy. The default policy name can be specified with the new MobiLink server command line option:

    -zup <name>   "set default policy name for user authentication (implies -zu+, cannot be used with -zu-)"

When a policy name is specified on the MobiLink server command line with this new option, any new MobiLink users that are not in the ml_user table are first authenticated against the LDAP server using this default policy, and then the user authentication scripts are optionally called if the ldap_failover_to_std property for the default policy is configured with a value of 1 or 2. If the user is fully authenticated, the user is added to the ml_user table and the same user authentication policy is used to authenticate this user later. This new command line option implies -zu+ and cannot be used with -zu-; the MobiLink server reports an error if both -zup and -zu- are given on the command line. Note that the given default user authentication policy name must exist in the ml_user_auth_policy table; otherwise the MobiLink server reports an error and refuses to start.

c) The MobiLink user password is hashed and stored in the ml_user table in the consolidated database if, and only if, the ldap_failover_to_std property is configured with a value of 1 or 2. The password is not saved if this property is set to 0.
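The three ldap_failover_to_std settings amount to a small decision table. As a rough illustration (plain Python, not MobiLink server code; the function and constant names here are made up for the sketch), the outcome of a synchronization request could be modeled like this:

```python
# Toy model of the documented ldap_failover_to_std settings (0, 1, 2).
# Returns (authenticated, script_status), where script_status is the
# value passed to the standard authentication scripts, or None when the
# scripts are not called at all.
LDAP_OK, LDAP_DENIED, LDAP_UNAVAILABLE = "ok", "denied", "unavailable"

def authenticate(failover_mode, ldap_result, scripts_accept=True):
    if failover_mode == 0:
        # LDAP only: any LDAP failure fails the sync request; scripts never run.
        return (ldap_result == LDAP_OK, None)
    if failover_mode == 1:
        # Scripts are called only when the LDAP servers are unavailable (status 6000).
        if ldap_result == LDAP_UNAVAILABLE:
            return (scripts_accept, 6000)
        return (ldap_result == LDAP_OK, None)
    # failover_mode == 2: LDAP is tried first, then the scripts always run,
    # receiving 1000 / 4000 / 6000 depending on the LDAP outcome.
    status = {LDAP_OK: 1000, LDAP_DENIED: 4000, LDAP_UNAVAILABLE: 6000}[ldap_result]
    return (scripts_accept, status)
```

For example, with mode 1 and an unreachable LDAP server, `authenticate(1, LDAP_UNAVAILABLE)` falls through to the scripts with status 6000, matching the behavior described above.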

MobiLink - iAS Branded ODBC Drivers

================(Build #1733 - Engineering Case #751198)================ The SQL Anywhere Oracle ODBC driver now supports the following new features when the database is running on an Oracle 12.1 server and the OCI library is from the Oracle 12.1 installation:
- The maximum size of the VARCHAR2, NVARCHAR2, and RAW data types has been increased from 4,000 to 32,767 bytes.
- Implicit result sets can be returned from stored procedures. The number of implicit result sets is limited to one per stored procedure. The implicit result set is detected automatically by the ODBC driver, regardless of the setting of the "Procedure returns results or uses VARRAY parameters" option in the DSN used by the ODBC application.

================(Build #1484 - Engineering Case #735343)================ The MobiLink server now supports consolidated databases running on Sybase IQ 16.0 servers. For the recommended ODBC drivers for Windows and Linux, see: http://www.sybase.com/detail?id=1011880
The Row Level Versioning (RLV) feature introduced in Sybase IQ 16.0 removes the "single-writer" limitation, so the IQ 16.0 server now allows multiple connections to modify an RLV-enabled table concurrently. Based on testing, uploads are about ten times faster for synchronizations with RLV-enabled tables than with RLV-disabled tables, so for better upload performance it is recommended that all synchronized tables be RLV enabled. However, if there is any table that cannot be RLV enabled, for instance a synchronized table that contains BLOBs and/or foreign keys, the upload phase must be serialized. This can be achieved by writing the begin_upload connection script to include or use the following SQL statement:

    LOCK TABLE table_name IN WRITE MODE WAIT time_string

where table_name is the name of a table defined in the IQ store and time_string gives the maximum time period to wait for the table lock. The table can be as simple as:

    CREATE TABLE coordinate_upload( c1 INT )

It is not required to contain any data. If any other MobiLink server transaction needs to modify IQ tables, all of those transactions must be serialized as well, using the same technique. This technique is considered more efficient than having the MobiLink server retry each of the transactions.

SQL Anywhere - ADO.Net Managed Provider

================(Build #2185 - Engineering Case #789513)================ SetupVSPackage.exe did not register the SQL Anywhere .NET DDEX Provider with Visual Studio 2015. This problem has now been corrected.

================(Build #1993 - Engineering Case #768717)================ The ADO.NET provider now supports Entity Framework 6. A new DLL (iAnywhere.Data.SQLAnywhere.EF6.dll) has been added to the %SQLANY%\Assembly\V4.5 directory. SetupVSPackage still registers the v4.5 DLL. To use the new Entity Framework 6 provider, register it in app.config or web.config. For example:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
      </configSections>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
      </startup>
      <connectionStrings>
        <add name="Entities" connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=iAnywhere.Data.SQLAnywhere;provider connection string='datasourcename=&quot;SQL Anywhere 12 Demo&quot;'" providerName="System.Data.EntityClient" />
      </connectionStrings>
      <system.data>
        <DbProviderFactories>
          <clear />
          <add name="SQL Anywhere 12 Data Provider" invariant="iAnywhere.Data.SQLAnywhere" description=".Net Framework Data Provider for SQL Anywhere 12" type="iAnywhere.Data.SQLAnywhere.SAFactory, iAnywhere.Data.SQLAnywhere.EF6, Version=12.0.1.41474, Culture=neutral, PublicKeyToken=f222fc4333e0d400" />
        </DbProviderFactories>
      </system.data>
      <entityFramework>
        <providers>
          <provider invariantName="iAnywhere.Data.SQLAnywhere" type="iAnywhere.Data.SQLAnywhere.SAProviderServices, iAnywhere.Data.SQLAnywhere.EF6, Version=12.0.1.41474, Culture=neutral, PublicKeyToken=f222fc4333e0d400" />
        </providers>
      </entityFramework>
    </configuration>

================(Build #1766 - Engineering Case #752772)================ Conversion of NUMERIC/DECIMAL columns to the .NET DECIMAL type has been improved.

SQL Anywhere - ODBC Client Library

================(Build #1444 - Engineering Case #731978)================ ODBC (and JDBC) escape sequence support has been enhanced to include the following functions:

    {fn TIMESTAMPADD( <interval>, <integer-expr>, <timestamp-expr> )}
        Returns the timestamp calculated by adding <integer-expr> intervals
        of type <interval> to <timestamp-expr>.

    {fn TIMESTAMPDIFF( <interval>, <timestamp-expr1>, <timestamp-expr2> )}
        Returns the integer number of intervals of type <interval> by which
        <timestamp-expr2> is greater than <timestamp-expr1>.

These escape functions are mapped directly to the SQL Anywhere DATEADD/DATEDIFF functions. The <interval> type can be one of the following:

    <interval>            SQL Anywhere DATEADD/DATEDIFF date-part mapping
    ====================  ===============================================
    SQL_TSI_YEAR          YEAR
    SQL_TSI_QUARTER       QUARTER
    SQL_TSI_MONTH         MONTH
    SQL_TSI_WEEK          WEEK
    SQL_TSI_DAY           DAY
    SQL_TSI_HOUR          HOUR
    SQL_TSI_MINUTE        MINUTE
    SQL_TSI_SECOND        SECOND
    SQL_TSI_FRAC_SECOND   MICROSECOND

Examples:

    // Number of days in February, 2013
    SELECT {fn TIMESTAMPDIFF( SQL_TSI_DAY, '2013-02-01T00:00:00', '2013-03-01T00:00:00' )}
    28

    // Timestamp for 28 days after February 1, 2013
    SELECT {fn TIMESTAMPADD( SQL_TSI_DAY, 28, '2013-02-01T00:00:00' )}
    2013-03-01 00:00:00.000000
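The interval arithmetic in the two examples above can be cross-checked with plain Python datetime arithmetic (illustration only; this is not SQL Anywhere code):

```python
from datetime import datetime, timedelta

feb1 = datetime(2013, 2, 1)
mar1 = datetime(2013, 3, 1)

# {fn TIMESTAMPDIFF(SQL_TSI_DAY, ...)} counts whole-day intervals.
days_in_feb = (mar1 - feb1).days

# {fn TIMESTAMPADD(SQL_TSI_DAY, 28, ...)} adds 28 day intervals.
result = feb1 + timedelta(days=28)

print(days_in_feb, result)  # -> 28 2013-03-01 00:00:00
```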

SQL Anywhere - OData Server

================(Build #2023 - Engineering Case #771963)================ A new OData Producer option has been created which allows service operations to use the names of the result set columns from the database when naming the properties of the ComplexType used in the ReturnType. Previously, if a stored procedure returned a result set, the OData Producer would only have used generated names (rtn1, rtn2, rtn3, ...). A side effect of using the new option is that column names from the database may be invalid OData identifiers, or there may be duplicate column names in the result set, which can produce invalid metadata. In this situation, users need to either change the names of the result set columns in the database, write a wrapper stored procedure with different column names in the result set, or revert to using generated column names.

    Option:       [producer-name].ServiceOperationColumnNames = { generate | database }
    Description:  Specifies whether the names of the columns in the metadata should be
                  generated (rtn1, rtn2, ...) or whether the names from the database
                  should be used. The default setting is generate.

================(Build #1648 - Engineering Case #746461)================ The OData Producer now respects the Content-Encoding and Accept-Encoding HTTP request headers as specified by the HTTP 1.1 spec (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). The Content-Encoding header is used by clients to indicate the encoding of the request body. The Accept-Encoding header is used by clients to indicate the preferred encoding of the response body.
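The interaction of the two headers can be illustrated with a short, generic sketch (plain Python and the standard gzip module; this shows ordinary HTTP 1.1 behavior, not OData Producer internals):

```python
import gzip

# A client that gzip-compresses its request body labels it with
# Content-Encoding, and uses Accept-Encoding to state which response
# encodings it can handle.
body = b'{"Name": "example"}'
request_headers = {
    "Content-Encoding": "gzip",   # how the request body below is encoded
    "Accept-Encoding": "gzip",    # acceptable encodings for the response body
}
request_body = gzip.compress(body)

# A server honoring Content-Encoding decompresses before processing:
decoded = gzip.decompress(request_body)
assert decoded == body
```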

SQL Anywhere - Other

================(Build #1725 - Engineering Case #742768)================ The Deployment wizard would have accepted an invalid product code. Now, the Next button is disabled until a valid product code is entered.

================(Build #1691 - Engineering Case #749465)================ Previously, an Oracle JRE was shipped with the software for use by clients. Now, the SAP JRE is shipped instead. Upgrading overwrites the JRE directory (%SQLANY16%\binXX\jre170) and its subdirectories. If you are using certificates, then your certificate store (%SQLANY16%\binXX\jre170\lib\security\cacerts) is overwritten, including your certificates. Similarly, fonts you added to the %SQLANY16%\binXX\jre170\lib\fonts\fallback directory to help display characters in the administration tools may be lost. To minimize upgrading steps with regard to the JRE change, create a backup copy of the JRE directory and all of its subdirectories before you upgrade so that you can refer to or restore files (such as cacerts) from the backup, as needed. To restore settings, use the java_vm_options option (SQL Anywhere) and/or the -sl java option (MobiLink) to optimize your Java VM startup settings.

================(Build #1670 - Engineering Case #749256)================
Strong encryption now achieved using OpenSSL
--------------------------------------------
Prior to this change, SQL Anywhere included a Certicom encryption module that provided the strong encryption used throughout the software. Now, SQL Anywhere includes an OpenSSL encryption module for strong encryption, and the Certicom encryption module has been removed. Read the following descriptions to determine how you may be affected by this change.

FIPS encryption now requires the private key of an identity file to be encrypted using AES
- OpenSSL FIPS supports AES encryption for the private key of an identity file. New servers using the OpenSSL FIPS encryption module will not start when using an identity file that has its private key encrypted with 3DES. You must re-encrypt the identity file using AES. To do this, run a command similar to the following using an upgraded viewcert utility:

      viewcert -p -o new-file-name -op new-password -ip old-password old-file-name

  The new and old passwords can be the same.
- The sample server identity file (rsaserver.id) and client identity file (rsaclient.id) have been modified so that the private keys are encrypted using AES rather than 3DES.
- Versions of the server that use the Certicom encryption module will not start when using an identity file that has its private key encrypted using AES. Trusted root certificate files specified using trusted_certificates do not need to be modified.

Self-signed certificates must now have the Certificate Signing attribute set
Self-signed certificates must now have the Certificate Signing attribute set when used with the identity encryption option (for example, the -x mlsrvXX and -xs dbsrvXX options). To determine whether a certificate has the Certificate Signing attribute set, use the viewcert utility and look for the Certificate Signing attribute in the Key Usage portion of the output. If your self-signed certificates do not have the Certificate Signing attribute set, then you must regenerate the certificates.

Create Certificate utility (createcert) now uses AES encryption instead of 3DES
The Create Certificate utility (createcert) now uses AES rather than 3DES encryption for encrypting the private key in the server identity file. A new option, -3des, has been added to the Create Certificate utility. Use this option when you want to create a 3DES-encrypted server identity file that can be used by both new and old servers. Note that new servers running in FIPS mode cannot start using 3DES-encrypted certificates; however, if you are not running in FIPS mode, then you can use 3DES-encrypted certificates.

View Certificate utility (viewcert) now uses AES encryption instead of 3DES
The View Certificate utility (viewcert) now uses AES rather than 3DES encryption when you specify the -p option to PEM-encode the output and when you specify the -ip and -op options to set the password. A new option, -3des, has been added to the View Certificate utility to allow you to encrypt output and passwords using 3DES instead of AES.

Database server now loads the FIPS driver file, dbfipsXX.dll, at startup
Previously, the 32-bit Windows database server loaded the FIPS driver file, dbfipsXX.dll, only when needed. Now, the 32-bit Windows database server always attempts to load dbfipsXX.dll at startup and keeps it loaded for the life of the server. If loading dbfipsXX.dll fails, then an error is returned only when an attempt is made to use FIPS encryption.

Deploying FIPS
If you are deploying FIPS encryption, then there are new shared libraries to deploy; these files are included in your software. The former files, sbgse2.dll and libsbgse2.so, are no longer installed by the software. The new files to deploy are:
- Windows 64-bit: libeay32.dll, ssleay32.dll, and msvcr100.dll
- Windows 32-bit: libeay32.dll, ssleay32.dll, and msvcr90.dll
- Linux: libcrypto.so and libssl.so
Note: On Windows, although 32-bit and 64-bit FIPS-certified OpenSSL libraries for encryption are provided, you must use the 64-bit libraries on a 64-bit system.

MobiLink-related changes and information
Connecting to a MobiLink server using client-side certificates now requires the Digital Signature certificate attribute to be set. TLS/SSL connections to a MobiLink server using client-side certificates now require the client-side certificate to have the Digital Signature attribute set; if the attribute is not set, then the connection will fail. To determine whether a certificate has the Digital Signature attribute set, use the View Certificate utility (viewcert) and look for the Digital Signature attribute in the Key Usage portion of the output. If your client-side certificates do not have the Digital Signature attribute set, then you must regenerate the certificates.

FIPS-based end-to-end encryption now requires the private key to be encrypted using AES
If the private key file provided to a MobiLink server by the e2ee_private_key file option of the -x command line option is encoded using 3DES and you are running in FIPS mode, then the private key file needs to be regenerated with the private key encrypted using AES.

How to update a MobiLink deployment that uses non-FIPS TLS/SSL (including HTTPS) and client-side certificates:
1. If your client-side identity certificates do not have the Digital Signature attribute set and the client connects directly to the MobiLink server, regenerate and deploy client-side certificates with the Digital Signature attribute set.
2. Update the server-side binaries.
3. Update the client-side binaries.

How to update a MobiLink deployment that uses FIPS TLS/SSL (including HTTPS) and client-side certificates:
These steps update the client identity certificates twice if the Digital Signature attribute is missing from the client-side identity certificates. This can make the update less disruptive because synchronizations can continue without having to coordinate the client-side and server-side updates to occur at the same time.
1. If your current client-side identity certificates do not have the Digital Signature attribute set and the client connects directly to the MobiLink server, regenerate and deploy client-side certificates with the Digital Signature attribute set.
2. Update the server-side binaries (remembering to include the new FIPS driver files) and deploy server identity certificates with AES-encrypted private keys.
3. Update the client-side binaries (remembering to include the new FIPS driver files) and deploy client identity certificates with AES-encrypted private keys.

How to update a MobiLink deployment that uses FIPS and end-to-end encryption:
1. Regenerate the private key file referenced by the e2ee_private_key encryption option.
2. Shut down the MobiLink server.
3. Update the MobiLink server binaries, remembering to include the new required FIPS driver files.
4. Change the e2ee_private_key option to point to the new private key file (or replace the old file), updating e2ee_private_key_password if required.
5. Restart the MobiLink server.

Note: Connecting to, or creating a MobiLink server resource in, the 32-bit Windows SA Monitor that uses FIPS encryption to connect to the MobiLink server may fail with the error "Failed to load library mlcrsafips12.dll". To work around this, either do not use FIPS encryption, or use the 64-bit SA Monitor. Similarly, connecting to a database server from 32-bit DBISQL, DBConsole, or Sybase Central using FIPS encryption may also fail with the error "Failed to load library mlcrsafips12.dll". To work around this problem, either do not use FIPS encryption, or use 64-bit client software.

SQL Anywhere - Server

================(Build #2231 - Engineering Case #794347)================ The server could have failed an assertion or fail to create some valid round-earth geometries. This has been fixed. ================(Build #2230 - Engineering Case #794129)================ The server performs performance rewrites on ISNULL and COALESCE function argument lists based on the nullablility of the arguments. More rewrites can now also be done if the argument contains a referencing old or new column of a trigger. ================(Build #2107 - Engineering Case #781692)================ Applications that fetched result sets using the SQL Anywhere JDBC Driver would find that the fetch performance was significantly impacted if the result set contained LOB columns. An enhancement has now been made such that fetch performance using the SQL Anywhere JDBC Driver is greatly improved for LOB columns ================(Build #2049 - Engineering Case #775142)================ When a statement has multiple search conditions on a single column, the server applies optimizations to combine and simplify these for two reasons: - To identify a sargable predicate that could use an index scan. - To improve the estimation of selectivity in order to make a better choice of access plan For some types of queries that involved an OR or an IN list predicate, the first goal was satisfied reasonably well but the OR / IN list could have been retained, affecting selectivity estimation. In some cases, this could lead to underestimating the number of rows returned from a table scan, potentially leading to execution plans with higher costs. This has been improved so that when multiple search conditions on a column contain an OR and/or an IN-list predicate, the predicate is simplified further. ================(Build #2033 - Engineering Case #772559)================ SQL Anywhere, MobiLink, and UltraLite, servers and clients, no longer support the SSLv3 protocol. All TLS connections must now be TLSv1 or higher. 
================(Build #1933 - Engineering Case #764913)================ The following features are not supported on Linux/ARM, and have now been disabled on this platform: - Remote Data Access - External Stored Procedures (note these are native dlls and shared objects that are loaded in process) - External Environments (including JAVA, CLR, PERL, PHP, C_ODBC and C_ESQL external environments) - LDAP UA - Kerberos Authentication ================(Build #1882 - Engineering Case #761933)================ In rare cases, the server may have crashed while parsing an incorrect IN search condition. This has been fixed. ================(Build #1688 - Engineering Case #747805)================ For Syntax 2 of the DELETE statement and Syntax 2 of the UPDATE statement, the error detection behaviour of the server has been improved. These two syntax forms allow an additional FROM clause that may contain the table-name of the updated or deleted table, for example: DELETE FROM [owner.]table_1 [ [ AS ] correlation-name ] FROM [owner.]table_1 [ [ AS ] correlation-name ] ... WHERE ... and UPDATE [owner.]table_1 [ [ AS ] correlation-name ] SET columns_1 = ... FROM [owner.]table_1 [ [ AS ] correlation-name ] ... WHERE ... If the DELETE or UPDATE clause and the additional FROM clause have a table reference that contains the same table name, in the above example "table_1", then the server can only decide whether both are identical table references if one of the following conditions is true: - neither table reference is qualified by specifying a user ID - both table references are qualified by specifying a user ID - both table references are specified with a correlation name In cases where the server cannot decide whether the above table references are identical, it will now return an SQL error to prevent unintended semantics such as deleting or updating too many rows.
================(Build #1675 - Engineering Case #747798)================ A new system function has been added, READ_SERVER_FILE(). This function reads data from a specified file on the server and returns the full or partial contents of the file as a LONG BINARY value. Syntax: READ_SERVER_FILE( filename [, start [, length ] ] ) Parameters: - filename LONG VARCHAR value indicating the path and name of the file on the server. - start The start position of the file to read, in bytes. The first byte in the file is at position 1. A negative starting position specifies the number of bytes from the end of the file rather than from the beginning. * If start is not specified, a value of 0 is used. * If start is zero and length is non-negative, a start value of 1 is used. * If start is zero and length is negative, a start value of -1 is used. - length The length of the file to read, in bytes. * If length is not specified, the function reads from the starting position to the end of the file. * If length is positive, the read ends length bytes to the right of the starting position. * If length is negative, the function returns at most the absolute value of length bytes up to, and including, the starting position. Returns: LONG BINARY Remarks: This function returns the full or partial (if start and/or length are specified) contents of the named file as a LONG BINARY value. If the file does not exist or cannot be read, NULL is returned. filename is relative to the starting directory of the database server. The READ_SERVER_FILE function supports reading files larger than 2GB; however, the returned content is limited to 2GB, and if the returned content would exceed this limit, a SQL error is returned. If the data file is in a different character set, the CSCONVERT function can be used to convert it.
If disk sandboxing is enabled, the file referenced in filename must be in an accessible location. Privileges: When reading from a file on the database server computer: * You must have the READ FILE system privilege. * You must have read permissions on the directory being read from. Standards: SQL/2008 Vendor extension. Example: The following statement reads 20 bytes from a file, starting at byte 100 of the file. SELECT READ_SERVER_FILE( 'c:\\data.txt', 100, 20 ) See also * xp_read_file system procedure * CSCONVERT function [String] * Disk sandboxing ================(Build #1674 - Engineering Case #747277)================ A new database property, BackupInProgress, has been added. Querying the property will return 'on' when a backup is in progress, and 'off' otherwise. ================(Build #1673 - Engineering Case #747205)================ The geospatial method ST_BUFFER is now supported for all geometry types. This method is compatible with the SQL/MM and OGC standards. ST_BUFFER returns the ST_Geometry value that represents all points whose distance from any point of an ST_Geometry value is less than or equal to a specified distance in the given units. ST_GEOMETRY::ST_BUFFER( distance double, unit_name long varchar ) - distance: The distance the buffer should be from the geometry value. Must be greater than or equal to 0. - unit_name: The units in which the distance parameter should be interpreted. Defaults to the unit of the spatial reference system. The unit name must match the UNIT_NAME column of a row in the ST_UNITS_OF_MEASURE view where UNIT_TYPE is 'LINEAR'. - Returns the ST_Geometry value representing all points within the specified distance of the original geometry. The ST_Buffer method generates a geometry that expands a geometry by the specified distance. This method can be used, for example, to find all points in geometry A that are within a specified distance of geometry B. The distance parameter must be a non-negative value.
This method will return an error if distance is negative. If the distance parameter is equal to 0, the original geometry is returned. The ST_Buffer method is best used only when the actual buffer geometry is required. Determining whether two geometries are within a specified distance of each other should be done using ST_WithinDistance instead. ================(Build #1665 - Engineering Case #746935)================ The dbo.sp_list_directory() stored procedure can be used to obtain information about directories and files that are accessible to the SQL Anywhere Server. Currently the sp_list_directory() procedure returns the following three columns: - file_path (long nvarchar): the path of the server-accessible file or directory - file_type (nvarchar(1)): either F for file or D for directory - file_size (unsigned bigint): the size of the file, or NULL for directories In order to provide more information about the various files and directories, dbo.sp_list_directory() has now been enhanced to return five additional columns: - owner (nvarchar(128)): the owner of the file or directory - create_date_time (timestamp with time zone): the date and time the file or directory was created - modified_date_time (timestamp with time zone): the date and time the file or directory was last modified - access_date_time (timestamp with time zone): the date and time the file or directory was last accessed - permissions (varchar(10)): the set of access permissions for the file or directory All other aspects of dbo.sp_list_directory(), including the set of system privileges and secure feature privileges, remain unchanged. A database has to be either upgraded or initialized in order for applications to obtain this new information from dbo.sp_list_directory(). In addition, if an upgraded or newly initialized database is subsequently moved to an older version of the server, then the new columns will continue to be returned, but their values will be NULL.
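As a rough, hypothetical analogue of what the enhanced sp_list_directory() reports, the extra columns map naturally onto ordinary file metadata. This Python sketch is illustrative only; the real procedure runs inside the database server and is subject to its privileges and disk sandboxing:

```python
import datetime
import os
import stat

def list_directory(path):
    """Rows loosely mirroring the documented dbo.sp_list_directory() columns.

    Illustrative analogue only: column names follow the procedure's
    documentation; the implementation is plain filesystem inspection.
    """
    rows = []
    for entry in sorted(os.scandir(path), key=lambda e: e.path):
        info = entry.stat(follow_symlinks=False)
        is_dir = entry.is_dir(follow_symlinks=False)
        rows.append({
            "file_path": entry.path,
            "file_type": "D" if is_dir else "F",
            "file_size": None if is_dir else info.st_size,  # NULL for directories
            "modified_date_time": datetime.datetime.fromtimestamp(info.st_mtime),
            "permissions": stat.filemode(info.st_mode),     # e.g. "-rw-r--r--"
        })
    return rows
```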
================(Build #1614 - Engineering Case #744027)================ The SQL Anywhere PHP External Environment supports several versions of the PHP interpreter. The SQL Anywhere install bundle includes a separate PHP external environment dll or shared object for each supported version of PHP. Previously, whenever support for a new version of the PHP interpreter was added, the SQL Anywhere install bundle was updated to include the new PHP external environment dll or shared object for that version. Going forward, the SQL Anywhere install bundle will no longer be updated with additional PHP external environment dlls or shared objects when support for a new version of the PHP interpreter is added. Instead, the new PHP external environment dlls and shared objects will only be available on the download site. ================(Build #1537 - Engineering Case #737497)================ Previously, the CREATE INDEX statement for local temporary tables on read-only nodes had been disallowed. This has been changed; on read-only databases, local temporary tables are now the only tables on which index creation is allowed. ================(Build #1473 - Engineering Case #734038)================ The database property TimeWithoutClientConnection has been added. The description for this database property is: Returns the elapsed time in seconds since a CmdSeq or TDS client connection to the database last existed. If there has not been a CmdSeq or TDS connection since the database started, the time since the database started is returned. If one or more CmdSeq or TDS connections are currently connected, 0 is returned.
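The start and length rules documented for READ_SERVER_FILE above can be modeled with a short Python sketch. This illustrates only the documented byte-position arithmetic, not the server call itself:

```python
def read_server_file(data, start=0, length=None):
    """Model of the documented READ_SERVER_FILE start/length rules.

    Positions are 1-based; a negative start counts back from the end of
    the file. Illustrative model only, not the server implementation.
    """
    # Defaulting rules: a start of 0 means 1, or -1 if length is negative.
    if start == 0:
        start = -1 if (length is not None and length < 0) else 1
    # Resolve a negative start into a 1-based position from the file end.
    pos = start if start > 0 else len(data) + start + 1
    if length is None:
        return data[pos - 1:]                      # read to end of file
    if length >= 0:
        return data[pos - 1:pos - 1 + length]      # length bytes rightward
    # Negative length: at most |length| bytes up to and including pos.
    return data[max(0, pos + length):pos]
```

For example, with a 10-byte file, a start of -1 returns the last byte, and a start of 5 with length -3 returns the 3 bytes ending at position 5.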

SQL Anywhere - Sybase Central Plug-in

================(Build #1537 - Engineering Case #739081)================ Inherited object privileges can now be viewed for any table, view, procedure, function, sequence generator, or dbspace via the “Privileges” tabs. Also, inherited object privileges can now be viewed for any user or role via the “Table Privileges”, “View Privileges”, “Procedure Privileges”, “Sequence Privileges”, and “Dbspace Privileges” tabs. In both cases, a new “Show Inherited” check box has been added to the tabs. With the check box checked, the tabs show privileges that are inherited through role inheritance, in addition to privileges that are granted explicitly.

SQL Anywhere - Utilities

================(Build #2160 - Engineering Case #786658)================ Starting in version 16, the Interactive SQL utility displayed a warning on shutdown (or disconnect) if there were uncommitted database changes and the option to commit on exit was not enabled. The window that contains that warning now has a checkbox that allows the warning to be suppressed. The warning can also be disabled, or re-enabled, from the Options dialog on the SQL Anywhere -> Execution tab. ================(Build #2041 - Engineering Case #773529)================ The EncryptedPassword (ENP) connection parameter is used to specify an encrypted password. It is a substitute for the Password (PWD) connection parameter. The intent of the ENP connection parameter is to disguise the actual password used to authenticate to a database. The current implementation obfuscates the password. There are some issues with this implementation: - The encrypted password could have been used to authenticate to the database by anyone from any computer who also had the corresponding user ID. - The ODBC Configuration for SQL Anywhere dialog of the ODBC Data Source Administrator (Windows-only) could have been used to return the encrypted password in clear text. - The encrypted password could have been decrypted with some effort. Encrypted password support has been enhanced with the following goals in mind: - Ability to restrict use to a particular computer or a particular computer/user. - Inability to reverse-engineer an encrypted password using the ODBC Configuration for SQL Anywhere dialog. - Better encryption algorithms to ensure that the encrypted password cannot be decrypted. - These enhancements to be available across all supported client platforms. A password can be encrypted on a computer such that it can only be decrypted on that computer. Anyone who can log on to the computer can use the encrypted password and corresponding user ID to authenticate to a database. It cannot be used on any other computer.
A password can be encrypted on a computer by a user such that it can only be decrypted on that computer for that user. It cannot be used on any other computer by the same or any other user. The Data Source utility (dbdsn) supports a new option, -pet a|c|u, specifying how the encrypted password may be used. If -pet a is specified, the password is encrypted for use on any computer. If -pet c is specified, the password is encrypted for use on this computer only. If -pet u is specified, the password is encrypted for use on this computer by this user only. The -pe option, which provides simple obfuscation, continues to be supported; however, its use is deprecated. Note that encryption for options -pet c and -pet u must be performed on the computer or computer/user for which it is intended to be used (decrypted). Note also that -pet u is not appropriate for client applications that are implemented as Windows services. Although -pet u can be used for a Windows System DSN, it may be more appropriate to use -pet c (since this form of encrypted password can be used by any user of the computer). The ODBC Data Source Administrator dialog is changed as follows: - The Encrypt password option is no longer a checkbox, but is now used to select from different encryption options: none, for use on any computer, for use on this computer only, and for use on this computer and this user only. - The dialog can no longer be used to change the level of password encryption for an existing password, unless it was previously unencrypted. If the level of encryption is to be changed, then the password must be reentered. The ODBC Configuration for SQL Anywhere driver for Oracle dialog is also modified accordingly. These features allow a database administrator to restrict database access to a user on a particular computer without revealing the actual plain text password to the user. They also prevent the current password from being decrypted to memory and consequently subject to inspection.
When successful decryption is restricted to a particular computer or computer/user, it no longer matters that the encrypted password is presented in plain text. For example, the encrypted password in the following connection string cannot be used by anyone other than the computer/user for whom it was created. dbping -d -c "Host=server-pc; Server=DemoServer; UID=DBA; ENP=05a17731bca92f97002100c39d906b70f3272fe76ad19c0e8bd452ad4f9ea9" The new encrypted password features are not supported by client libraries prior to this change. The File Hiding utility (dbfhide) options -wm (computer-only) and -w (computer/user-only) are now supported on all platforms for which dbfhide is available. The dbfhide tool can be used to encrypt an entire connection string to a file for use by most of the database tools that accept connection strings (e.g., dbping -d -c @credentials.hidden). Note that the encrypted password support described here is a client feature (storage of passwords on the client) and should not be confused with encryption of passwords over the wire during authentication, a default feature of SAP SQL Anywhere and optional in products like SAP Adaptive Server Enterprise (ASE). ================(Build #2030 - Engineering Case #772826)================ When the Interactive SQL utility (dbisql) was run as a console application and connection parameters were specified, but the database password was not, dbisql would have exited without connecting. Now, dbisql will prompt for a password, given that the "-q" (quiet) option was not specified. Since a password is not required on the command line, the password will not be visible with OS utilities used to view process command lines.
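The idea behind the -pet c and -pet u options, binding a stored secret to a machine or a machine/user pair, can be illustrated with a toy Python sketch. The key derivation and XOR stream below are deliberately simplistic stand-ins with no relation to SQL Anywhere's actual algorithms, and machine_id/user are hypothetical inputs:

```python
import hashlib

def derive_key(machine_id, user=None):
    """Toy key derivation: the key depends on the machine, and optionally
    on the user as well (the idea behind -pet c versus -pet u).

    Hypothetical illustration; SQL Anywhere's real key material and
    algorithms are not documented here.
    """
    material = machine_id if user is None else machine_id + "\x00" + user
    return hashlib.sha256(material.encode()).digest()

def xor_stream(data, key):
    """Toy symmetric transform (encrypting and decrypting are the same call)."""
    stream, block = b"", key
    while len(stream) < len(data):
        block = hashlib.sha256(block).digest()
        stream += block
    return bytes(a ^ b for a, b in zip(data, stream))
```

A secret protected with derive_key("pc1", "alice") round-trips only with the same machine/user pair; a key derived for another machine or user yields garbage, mirroring the restriction that a -pet u password is unusable elsewhere.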

UltraLite - Runtime Libraries

================(Build #2133 - Engineering Case #784448)================ Support for Windows Phone 8.1 and Windows 8.1 has now been added. Note, existing support for Windows Phone 8.0 and Windows 8.0 is unchanged. ================(Build #1760 - Engineering Case #753086)================ UltraLite now supports Xcode 5 and iOS 7. Version 16 includes 64-bit libraries for the new A7 (arm64) chip along with the 64-bit simulator. ================(Build #1716 - Engineering Case #750618)================ UltraLite is now supported for Windows 8 store applications and Windows Phone 8 applications. Each bottom-level directory in the following tree under the SQLA install root contains a WinRT-based component (UltraLite.winmd/UltraLite.dll) that implements the UltraLite API for the noted platform.

UltraLite
+-- WinRT
    |-- WindowsPhone
    |   +-- 8.0
    |       +-- arm : Windows Phone 8 devices
    |       +-- x86 : Windows Phone 8 emulator
    +-- Windows
        +-- 8.0
            +-- arm : Windows RT ARM-based devices
            |-- x64 : Windows 8 store apps for x64 architecture
            +-- x86 : Windows 8 store apps for x86 architecture

Developing UltraLite applications using this software requires SQL Anywhere 16 and Microsoft Visual Studio 2012 or later. Developing Windows Phone applications requires the Windows Phone SDK 8.0, which is available from: http://dev.windowsphone.com/en-us/downloadsdk Windows Phone SDK 8.0 requires Windows 8. Microsoft lists the complete system requirements at: http://www.microsoft.com/en-us/download/details.aspx?id=35471 Developing Windows store applications requires the Windows SDK for Windows 8, which is available from: http://msdn.microsoft.com/en-us/library/windows/desktop/hh852363.aspx The UltraLite.WinRT directory under the SQLA samples root contains a Visual Studio 2012 solution for the CustDb sample that appears in various forms for other UltraLite supported platforms.

UltraLite - UltraLite Engine

================(Build #2147 - Engineering Case #786041)================ UltraLite is now supported for 32-bit Linux. Note the following special instructions for installation. When installing for the first time, or overwriting a current installation, select the option "1. Create a new installation" and then select the components to install; 32-bit UltraLite will be available for install. If, instead, you wish to upgrade an existing install, the setup program must be run twice: first choose the menu item "2. Modify an existing installation" to install the new 32-bit UltraLite feature, then run setup again and choose the menu item "3. Upgrade an existing installation" to update all of the remaining files. ================(Build #2023 - Engineering Case #771766)================ UltraLite will now use the system’s trusted roots if no trusted root certificate is provided. Also, the install package now includes a compiled library in the usual place. The script build.sh is no longer used to produce the library after installing.
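The fallback to the system's trusted roots is analogous to what Python's standard ssl module does when no CA file is supplied (an analogy only; UltraLite's root store handling is platform-specific, and the certificate path in the comment is hypothetical):

```python
import ssl

# With no explicit trusted root certificate, fall back to the operating
# system's trusted root store for server certificate verification.
ctx = ssl.create_default_context()            # loads system default CAs
ctx.load_default_certs(ssl.Purpose.SERVER_AUTH)

# An explicitly supplied root would instead be loaded with, e.g.:
# ctx.load_verify_locations(cafile="trusted_root.pem")  # hypothetical path
```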

UltraLite - UltraLite.NET

================(Build #1426 - Engineering Case #727144)================ SQL statements and queries with many parameters could have taken a long time to complete in an UltraLite.Net application. This has been fixed.



16.0.0 Bug Fixes

(see also Critical Bug Fixes) (see also New Features)
This section contains a description of bug fixes made since the release
of version 16.0.0.

MobiLink - Java Plugin for Sybase Central

================(Build #2120 - Engineering Case #783219)================ In MobiLink projects with an ASE consolidated database, the "Tables (by Owner)" container in Sybase Central could have contained the same owner (e.g. "dbo") multiple times. This has been corrected so that a given owner is now listed only once. ================(Build #2118 - Engineering Case #783058)================ The "Deploy Synchronization Model" wizard in the MobiLink plug-in could have crashed when attempting to browse the MobiLink users in a database and incorrect credentials were provided for the database. This has been fixed. ================(Build #2088 - Engineering Case #779884)================ Generating MobiLink sync models would have failed when the consolidated database type was SQL Anywhere using a Turkish collation. In addition, failures could have occurred with any SQL Anywhere collation if a user created a table named SYSTRIGGERS. These issues have been fixed. ================(Build #1672 - Engineering Case #747118)================ In the MobiLink Plugin, if the MobiLink Server CommandLine was renamed and then deleted without shutting down Sybase Central, the delete confirmation dialog would have shown the old name for the command line instead of the new one. This has now been corrected. ================(Build #1538 - Engineering Case #739173)================ In the MobiLink Server Log File Viewer, on the "Synchronizations" panel, a single synchronization could have appeared multiple times in the "Synchronizations" table. This happened when the messages from the synchronization were interspersed with messages from "<Main>". This has been fixed. ================(Build #1493 - Engineering Case #735993)================ The test window of the MobiLink Plug-in was not modal. This has been corrected.
================(Build #1492 - Engineering Case #735922)================ In the test window of the MobiLink Plug-in, the client log page would have continued to report “Synchronization in progress”, even when the MobiLink server did not start correctly during a synchronization and the synchronization had failed and was complete. This has been fixed. ================(Build #1480 - Engineering Case #731345)================ Sybase Central could have crashed while testing a synchronization model if a test synchronization was cancelled while rows were being fetched for the "Data" tab, and if the database server was a little slow in returning the data. This has been fixed. ================(Build #1438 - Engineering Case #731600)================ When creating a MobiLink project, or when adding a consolidated database to a project, an inappropriate error message could have been raised saying that a database connection could not be made. The problem was specific to connecting to databases using an ODBC Data Source which contained a user id, and not giving a user id in the new project and add consolidated database wizards. This has been fixed. ================(Build #1436 - Engineering Case #731308)================ When working with the MobiLink plug-in, a connection to a consolidated database is usually required. The connection is opened automatically when it is needed. If the saved connection information is no longer sufficient, the "Connect" window opens to prompt for credentials. If the "Connect" window opened as a result of testing a synchronization model, it could have opened behind a status window which was opened by the Test window. This would have prevented the entering of database credentials, and the software would subsequently have reported an internal error. This has been fixed.

MobiLink - MobiLink Agent

================(Build #1761 - Engineering Case #753095)================ The MobiLink Agent for central administration of remote databases could have stored bad character data for task results. This in turn could have caused errors during synchronization with a MobiLink server. This problem would have occurred on a host with a multi-byte character set, and a task with an “execute SQL script” command that returned a result set with multi-byte characters. This has been fixed. ================(Build #1754 - Engineering Case #753092)================ The MobiLink Agent for central administration of remote databases could have crashed when shut down. For the crash to have occurred, the agent must have executed a task that was conditional on the current network name on the device. This has been fixed. ================(Build #1464 - Engineering Case #733583)================ The MobiLink Agent for central administration of remote databases could have executed a given task ID concurrently if the task was running on a schedule and also was server-initiated. This has been fixed. Although tasks may run concurrently in general, only one instance of a given task ID should be executing at any given time.

MobiLink - MobiLink Profiler

================(Build #2247 - Engineering Case #795641)================ On Linux systems, opening help did not work if the machine used a network proxy. This has been fixed. ================(Build #1704 - Engineering Case #748972)================ When the Utilization Graph pane was enabled, if the Chart pane did not have a vertical scroll bar, the Zoom To Selection menu and toolbar button might not have worked correctly, and scrolling might not have worked correctly. This has been fixed. A workaround is to disable the Utilization Graph pane, or to resize the Chart pane so that it has a scroll bar. ================(Build #1696 - Engineering Case #748429)================ With some versions of Linux and Unix, the Port field of the Connect to MobiLink Server window was not displayed correctly; it was too narrow for the port number to be visible. This has been fixed. ================(Build #1608 - Engineering Case #743688)================ If the MobiLink Profiler database (MLProfilerDB) was closed while a profiling session was active, and then the profiling session was ended, a Java RuntimeException internal error would have occurred. This has been fixed; appropriate error dialogs are now displayed. ================(Build #1593 - Engineering Case #742630)================ After opening the Sample Range Properties window, some counts could have been incorrect in the Events tab for subsequent invocations of the Sample Properties or Sample Range Properties windows. This problem has been fixed. A workaround is to re-open the profiling session after opening the Sample Range Properties window. ================(Build #1540 - Engineering Case #739361)================ An error could have occurred if the View > By Remote ID option was selected when a profiling session was started. This has been fixed. A workaround is to use View > Compact View instead.
================(Build #1529 - Engineering Case #738659)================ In some cases, a profiling session opened from the profiling database may have had some changed information compared to when the profiling session was collected. Some data may not have been saved correctly in the database, especially if the same MobiLink Profiler instance was used for multiple sessions and non-initial sessions were ended while synchronizations were active, or while recent data was still being saved in the profiling database. Even if the data in the profiling database was correct, the "in progress" and "active" sync properties would always have been false when a saved session was opened. These problems have now been fixed. To mitigate these problems, restart the Profiler between sessions, start sessions when no synchronizations are in progress, and end sessions after all synchronizations have completed. ================(Build #1474 - Engineering Case #734216)================ If a previously recorded profiling session was opened, the database connection IDs for synchronizations displayed in Synchronization properties, or the Details Table, would have been incorrect. The values saved in the database, though, would have been correct. This has been fixed.

MobiLink - Relay Server

================(Build #2451 - Engineering Case #806599)================ The user specifies an amount of memory for the Relay Server to use with the shared_mem option in the Relay Server configuration file, but this value is modified to account for the number of backend servers. If the newly calculated value exceeded 4GB, the rshost process would crash on shutdown, and could also have crashed during normal operation if the process required more than 4GB of memory. This has been fixed. ================(Build #2336 - Engineering Case #801820)================ In very rare circumstances, it was possible for the Outbound Enabler to have crashed while printing an error to the log file after a network error. This has now been fixed. ================(Build #2184 - Engineering Case #789934)================ When a Relay Server was configured to serve thousands of backend servers, the Relay Server State Manager (rshost) service on Windows may have sporadically failed to start up. This has been fixed. ================(Build #2184 - Engineering Case #789930)================ The Relay Server State Manager (rshost) start-up may have reported false positive errors regarding invalid configuration found in the Relay Server configuration file. This has been fixed. ================(Build #2182 - Engineering Case #789682)================ When the response body of an HTTP request was bigger than 65453 bytes, and the request was associated with a SAP V3 passport, the Relay Server may have sporadically delivered only the response headers but not the body. The Relay Server would report an error in this case, but the type of error could vary; the Outbound Enabler would not report an error. This has been fixed so that the entire response is relayed without any error. ================(Build #2182 - Engineering Case #789680)================ The Relay Server logs the SAP passport when one is associated with the request.
The Relay Server client interface, as well as the server interface, logs the connection counter of the V3 passport as the last component of a compact representation of the passport. The up and down channels of the server interface were logging the counter with incorrect values. This has now been fixed. ================(Build #2180 - Engineering Case #789326)================ The Relay Server for Apache did not relay all duplicate HTTP response headers received from the backend server that had the same header name, regardless of value. Only the last duplicate header that was read was sent back to the client. This has been fixed. ================(Build #2152 - Engineering Case #796808)================ Large HTTP requests going through the Relay Server Outbound Enabler could have caused a 401 unauthorized response. This has been fixed. ================(Build #2132 - Engineering Case #784176)================ If the entire MAC address list reported in the Outbound Enabler log was copied directly into the MAC property of the backend server section in the Relay Server configuration, Outbound Enabler access would have been denied, and the Relay Server would have reported an RSE3000 error. This has been fixed so that the success of the authentication doesn’t depend on whether the delimiters are copied or not. A workaround is to copy only one of the MAC addresses in the list without the trailing delimiting exclamation marks. ================(Build #2084 - Engineering Case #779166)================ The Relay Server for Apache may not have noticed an incorrect end-of-certificate comment in a client certificate. This has been fixed. ================(Build #2048 - Engineering Case #775149)================ The Apache setup script could have generated duplicate lines in the <apache-install>/bin/envvars file when the script was run multiple times. The duplicate lines were generated for setting the LD_LIBRARY_PATH environment variable. This has now been fixed.
================(Build #2047 - Engineering Case #774899)================ The Relay Server State Manager (rshost) could have crashed while reporting a Relay Server configuration file error message: "RSF11020: Missing required section ‘<section-name>’ in configuration file ‘<config-file-name>’" This has been fixed. ================(Build #1961 - Engineering Case #766464)================ Host name and Relay Server version information have now been removed from the status page when accessed through the client or server extension. The information still remains available via the optional admin or monitor extensions. The admin and monitor extensions are expected to be accessible by administrators only. ================(Build #1945 - Engineering Case #765512)================ When the Outbound Enabler used a secure HTTPS connection to the backend server, if the connection to the backend server was re-used after it was recycled, it was possible for the Outbound Enabler to have crashed. This has been fixed. ================(Build #1944 - Engineering Case #765449)================ The Relay Server keeps records of statistics per type of client and there is an internal limit of 1600 types per backend server in the backend farm. When this limit was reached the Relay Server would have issued an RSF13011 error and failed the relay. This has been fixed with the following changes: - The Relay Server no longer creates new metrics until the rs_monitor.dll has been accessed. Most partners don’t distribute rs_monitor.dll. - If rs_monitor.dll has been accessed and the number of client types of a backend server has exceeded 1600, a new RSW107 warning is issued instead of the RSF13011 fatal error. - In the RSW107 situation, the Relay Server will continue to relay the traffic, but no new metrics are created for the new client type. 
================(Build #1919 - Engineering Case #763863)================ Running the Relay Server IIS 7.0 quick setup script on a system without a preexisting IIS installation, and then accessing SimpleTestApp.htm through IIS which was installed by the quick setup script would have resulted in a 404.4 response. The problem was that the StaticFileModule required for the demo was not installed. This has been fixed and the message associated with the install step has been extended to explain that the installation is neither minimal nor full and users are encouraged to customize the list of features to fit their actual web server needs. ================(Build #1896 - Engineering Case #762786)================ When the RSOE used -cs https=1 and a client had disconnected before receiving all response bytes from the backend server, subsequent communications with the backend server may have suffered from false OEE1048(MLC53) SSL handshake errors. This has been fixed. The RSOE may also have suffered false OEE1048(MLC8) SSL read errors when -cs https=1 was used. This has also been fixed. ================(Build #1895 - Engineering Case #762615)================ When communications occurred between the Outbound Enabler and the backend server with the command line option -cs containing https=1, the Outbound Enabler may have crashed or reported OEE1048 with missing detail. For example: OEE1048: The communication between the Outbound Enabler and the backend server failed with [MLC24: ???] while performing secured write. sidx=0 socket=01028188 sfp=58f2ed03 A workaround to diagnose the communication error in the case where the RSOE didn’t crash is to look at the Relay Server log as the detail is sent there and reported as an OEE1048 embedded in RSE4015 with details. The crash has been fixed and the details in RSOE log have been restored. 
An example after the fix: OEE1048: The communication between the Outbound Enabler and the backend server failed with [MLC24: Server certificate not trusted. The system-specific error code is 336134278 (hex 14090086).] while performing secured write. sidx=0 socket=01028188 sfp=58f2ed03 ================(Build #1850 - Engineering Case #759461)================ It was possible for the Relay Server to have incorrectly reported fatal error RSF13011 (Failed allocating shared memory block for client traffic statistic collector of backend server 'XXX' in backend farm 'YYY'). After this fatal error, the Relay Server would no longer have communicated with backend server 'XXX' in backend farm 'YYY' until the Relay Server Host Manager was restarted. While there continue to be legitimate reasons for the RSF13011 error to be reported, the problem that would have led to the incorrect reporting of the RSF13011 error has now been fixed. ================(Build #1822 - Engineering Case #757282)================ The Outbound Enabler was taking longer than necessary to shut down. This has been fixed by removing unneeded operations and tuning for a faster shutdown response. ================(Build #1822 - Engineering Case #757265)================ The Outbound Enabler may have crashed on startup while creating an HTTPS up-channel and down-channel at the same time before using them. This has been fixed. ================(Build #1794 - Engineering Case #755249)================ When a URL query parameter contained double forward slashes, the IIS Relay Server incorrectly converted them into a single forward slash. The X-Original-Url header correctly preserved the original URL and could have been used by the backend server as a workaround. This issue has been fixed. Note, this issue doesn't occur on the Apache Relay Server. 
================(Build #1786 - Engineering Case #754419)================ The Relay Server running on an Apache web server with a hybrid MPM (Event or Worker) could have crashed. The issue was seen with the Event (hybrid) MPM; the Prefork MPM did not show this problem. This has been fixed. ================(Build #1767 - Engineering Case #750638)================ If a client application had included the "Expect: 100-continue" header in the request and had passed this request through the Relay Server, it was possible for the client to have received multiple "HTTP 100 Continue" responses, which would likely have caused the client to have failed the request. This has been fixed, and only a single "HTTP 100 Continue" response will now be sent. ================(Build #1725 - Engineering Case #750493)================ On Linux systems, the Relay Server State Manager (rshost) process could have failed to start under the following conditions: - the -o option was not specified, AND - any of the environment variables $TMPDIR, $TMP, $TEMP was defined but the directory did not exist. Depending on how rshost was started, it could have failed silently. This has been fixed. ================(Build #1696 - Engineering Case #748414)================ Relay Server monitoring provided via the SQL Anywhere Monitor may have suffered stack overflow exceptions on the Java data collection client. This has now been fixed. ================(Build #1638 - Engineering Case #745191)================ The Outbound Enabler had a fixed limit of 1000 active connections with the backend server per Relay Server, and would have crashed when the limit was exceeded. This has been fixed by relaxing the internal limit to 32768 active connections. An OEE1051 error is now given when that limit is exceeded. 
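The "Expect: 100-continue" fix above (case #750638) amounts to sending the interim response at most once per request. A minimal sketch of that rule, in illustrative Python rather than product code (the function and parameter names are invented):

```python
def handle_expect(headers: dict, already_answered: set, request_id: str) -> bytes:
    # Send "HTTP/1.1 100 Continue" at most once per relayed request;
    # repeated interim responses can make clients fail the request.
    if (headers.get("Expect", "").lower() == "100-continue"
            and request_id not in already_answered):
        already_answered.add(request_id)
        return b"HTTP/1.1 100 Continue\r\n\r\n"
    return b""
```

A second call for the same request ID returns an empty reply, so only a single interim response ever reaches the client.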
================(Build #1600 - Engineering Case #743046)================ When the Apache httpd shutdown raced ahead of the Outbound Enabler shutdown, the Up channel may never have been gracefully shut down, as Apache terminates the worker process non-gracefully. This in turn could have caused the Relay Server State Manager (rshost) to leak System V semaphores on shutdown. The "ipcs -s" command can be used to review the System V semaphores in use. This has been fixed by eliminating the latency on Up channel shutdown so that the race condition is much less likely to happen. This change is not a complete solution, but it reduces the possibility of this problem occurring. ================(Build #1584 - Engineering Case #740747)================ The Apache Relay Server did not respect the client's application timeout header (IAS-RS-App-Timeout-Minute). If the client's application timeout header value was smaller than Apache's 'Timeout' directive, the Apache Relay Server would have taken longer to time out the client's request, up to Apache's Timeout directive. This has been fixed. ================(Build #1581 - Engineering Case #741408)================ The Relay Server for Apache correctly set the HTTP status code on the client's HTTP response; however, it didn't return that same HTTP status code back to the Apache web server. This caused the wrong HTTP status code to be printed in Apache's access_log. This has now been fixed. ================(Build #1570 - Engineering Case #741205)================ The Relay Server and the Outbound Enabler were not designed for clients that did not maintain affinity isolation. The Outbound Enabler has been incrementally patched to fulfill such a need. The new Relay Server liberation option (socket_level_affinity=no) is an efficient way to have the Relay Server transform the traffic so that the relay mechanism will no longer be exposed to unintended use. 
In other words, the net effect is to support the previously unintended use with the liberation option. Liberation is not turned on by default and some customers may be willing to upgrade their Outbound Enabler, but not the Relay Server in the DMZ. So testing of such unintended use without liberation has been increased, and yet another case where such unintended use may still fail without using liberation has been found. This change is to fix the Outbound Enabler to deal with such a case. A workaround is to use liberation by explicitly setting socket_level_affinity=no. ================(Build #1559 - Engineering Case #740559)================ The affinity information injected by the Relay Server carries information for addressing the socket opened from the Outbound Enabler to the backend server. This was required for end-to-end persistent connections between Client-RS and OE-Backend, while the shared RS-OE connection is always persistent. Such socket level affinity calls for affinity information isolation per socket on the client side. This proprietary isolation requirement was found to be too restrictive. Partners have been releasing their clients or utilizing third party client software in solutions using the Relay Server where the isolation requirement has not been met. Not all development environments or third party clients can support the implementation of the isolation. A change has now been made to introduce an optional relaxation from the Relay Server so that it will reduce the level of addressing information to backend server level instead of socket level. Persistent connection between Client-RS can still be maintained as the Relay Server will work with the Outbound Enabler to transform the OE-Backend accesses into non-persistent transient access. The net result is a liberation on developing integration between the client and the Relay Server. 
A new backend_farm property called socket_level_affinity has been added for controlling the behavior on a per backend farm basis. The liberation described above is DISABLED by default (i.e. socket_level_affinity=yes is the default). Online configuration update of this property is supported. External requirements: Updating to this new Relay Server with socket_level_affinity=no doesn't require deploying a new client to clear a previous affinity cookie assigned by the Relay Server. Also, the Outbound Enabler doesn't need to be upgraded in order to take advantage of the liberation. This is an internal behavior change which continues to require the backend servers to allow non-persistent HTTP traffic and/or broken-up persistent HTTP traffic. This backend server requirement remains regardless of whether the liberation is used or not. If liberation is enabled, the backend server doesn't need to support or allow persistent HTTP connections. ================(Build #1559 - Engineering Case #740486)================ Clients are not expected to use expired affinity; however, not all client implementations can support the proprietary expiry defined by the Relay Server. For that reason, the Outbound Enabler was relaxed to let in new requests with expired affinity. This relaxation was found to be incomplete. 
Under certain access sequences, a POST request may still have suffered errors like the following: RS16: RSE4015: Outbound enabler of backend server 'S0' in backend farm 'RSTEST02.F0' reports session error OEE_SESSION_ACCESS_FAILED(1051) with parameters 'RS_CLI_REQUEST_CONTINUE', 'disconnected at the middle of a request', '_unused_' OE16: OEE1051: The Outbound Enabler was unable to access the session with ridx=0 sidx=0 snum=0000 sfp=01aa9daa on a RS_CLI_REQUEST_CONTINUE packet due to disconnected at the middle of a request RS12: RSE4004: Outbound enabler of backend server 'S0' in backend farm 'MLVM-SARSX64.F0' reports session error OEE25100 with parameters '_unused_', '_unused_', '_unused_' OE12: Session was disconnected at the middle of a packet sequence. Aborting sidx=0 The problematic sequence has been identified and a fix has been made so that the OE can handle a POST that falls into that sequence. If the traffic is RESTful, a user workaround is to turn off all affinity injection from the Relay Server using the active_cookie=no and active_header=no properties in the backend_farm configuration section of the affected backend farm, and to clear HTTP cookies from the client after the Relay Server configuration has been updated. ================(Build #1558 - Engineering Case #740440)================ The Relay Server would have responded with "HTTP 200 OK" to requests that didn't carry User-Agent headers, without actually performing the relay. The Relay Server uses the User-Agent header (or alternatively the IAS-RS-User-Agent header) to group metrics for aggregated statistics. A fix has been made so that requests that don't carry User-Agent or IAS-RS-User-Agent headers are now processed, and their metrics are collected under the group with User-Agent "_unknown_". This problem was reported when using the Relay Server with a Windows 8 Store app that accesses NetWeaver Gateway OData services via the Relay Server and SUP 2.2. 
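The backend_farm properties mentioned in the entries above can be combined in the backend_farm section of the Relay Server configuration file. A hypothetical fragment (the property names come from the text; the farm ID, values, and comment style are placeholders):

```
[backend_farm]
id = MyFarm
# relax socket-level affinity isolation ("liberation"; default is yes)
socket_level_affinity = no
# RESTful workaround for case #740486: turn off affinity injection
active_cookie = no
active_header = no
```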
================(Build #1555 - Engineering Case #740328)================ The Relay Server would have incorrectly stopped relaying and reported the error RSE4008 with "Malformed HTTP chunk" when it encountered chunked server responses with valid extensions or trailers (see RFC2616 section 3.6.1). This has been fixed. Logging of the extensions and trailers was added for log level 4 and higher. ================(Build #1554 - Engineering Case #738201)================ When attempting a download resumption through the Relay Server to a MobiLink server farm, the client could have connected to the wrong MobiLink server, resulting in the download resumption attempt failing. The client wasn't persisting the HTTP cookies that the Relay Server requires to match it back up with the correct MobiLink server. The cookies are now stored with the rest of the restart state. ================(Build #1547 - Engineering Case #739810)================ During quick setup, users are prompted with the following question: Expect Afaria clients (y/N)? If the answer is yes, the expectation is that the quick setup script will configure IIS to turn off request buffering, due to the incompatibility between IIS and the Afaria client regarding how the size of the entity body of the HTTP request was specified. The IIS7 version of the quick setup script was failing to turn off the buffering. This problem has been fixed. ================(Build #1530 - Engineering Case #738786)================ The integrated Relay Server Outbound Enabler (RSOE) could have missed processing a socket close event, leading to leaked memory. This problem was rare and random and only occurred in a time-sensitive scenario where the integrated RSOE was sending data to the backend server and received a notification from the Relay Server to close that particular socket connection. This has been fixed. 
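Case #740328 above concerns chunk-size lines that carry extensions, e.g. `1a;name=value`, which are valid per RFC 2616 section 3.6.1. A sketch of tolerant parsing (illustrative Python, not the Relay Server's code):

```python
def parse_chunk_size(size_line: bytes) -> int:
    # The chunk size is the hex number before any ';'-separated chunk
    # extensions; a parser must not treat the extensions as garbage.
    hex_digits = size_line.split(b";", 1)[0].strip()
    return int(hex_digits, 16)
```

For example, `parse_chunk_size(b"1a;name=value\r\n")` yields 26, whereas feeding the whole line to `int(..., 16)` would raise an error and wrongly flag a valid chunk as malformed.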
================(Build #1529 - Engineering Case #736239)================ When the MobiLink client held on to old affinity information across Outbound Enabler restarts, and used it on new requests, the Outbound Enabler would have failed the relay (reported as an OEE1051 error on version 16). The Outbound Enabler has been fixed to allow the traffic to go through instead. ================(Build #1527 - Engineering Case #738782)================ The integrated Relay Server Outbound Enabler (RSOE) library could have crashed the MobiLink server in the following cases: 1) in an error case, such as a failure to send data to the backend; 2) in a timing-sensitive situation, while the RSOE was sending data to the backend and receiving a notification from the Relay Server to disconnect a socket (due to the client dropping the connection or aborting for any reason before completing a clean send/receive HTTP cycle). This has been fixed. ================(Build #1503 - Engineering Case #736717)================ The Relay Server may have reported that the shared memory manager was in an unhealthy state when under heavy load. This has been fixed. ================(Build #1503 - Engineering Case #736716)================ By design, the Relay Server Outbound Enabler treats a client disconnect as an immediate cancel of the request if it happens in the middle of the HTTP request/response cycle. Therefore, uploads could have been truncated by the immediate cancelling if the client disconnect arrived while the upload was queued up. This has now been changed to make sure all request bytes that came before the disconnect are not cancelled by the client disconnect. Also, the RSOE no longer views this situation as abnormal behavior of an HTTP client. 
================(Build #1500 - Engineering Case #736791)================ In the case where users turn on HTTP backend status detection in the Relay Server Outbound Enabler (by providing a url_status parameter in the -cs option), it was possible for the RSOE to fail to identify whether the backend was available. This has now been fixed by making the RSOE more tolerant of space characters when parsing the HTTP response provided by the backend server. ================(Build #1493 - Engineering Case #735998)================ The Relay Server Outbound Enabler may have restarted unnecessarily in rare situations, resulting in repeated RSE3003 errors for a duration as long as the OE-RS liveness timeout period. One example situation would have been when a Relay Server was removed from the farm and then added back later. This has been fixed. ================(Build #1479 - Engineering Case #734841)================ The Relay Server component may have silently missed error messages with old versions of the language resource. This has been fixed by adding a generic error message indicating the resource library is too old. ================(Build #1474 - Engineering Case #734315)================ In some cases, the Relay Server Outbound Enabler could have failed to start up without giving a specific startup error, but rather a general initialization error. This had been seen specifically in the integrated RSOE case. This has now been fixed in order to better help resolve startup issues. ================(Build #1472 - Engineering Case #734068)================ A MobiLink Server with the integrated Relay Server Outbound Enabler could have crashed on shutdown. This has now been fixed. ================(Build #1463 - Engineering Case #733471)================ The Outbound Enabler was performing unnecessary operations when an internal restart was caused by an up-channel failure. 
This change eliminates the unnecessary operations, improving recovery time and the clarity of logged operations. ================(Build #1453 - Engineering Case #733171)================ The Relay Server provided no option to inject the X-Original-URL header. This has been fixed by injecting the header whenever the original request didn't contain such a header. The injected header value is URL-encoded. ================(Build #1453 - Engineering Case #732975)================ The Apache Quick Setup script contained some bash-specific syntax that caused errors when run on Ubuntu systems, since Ubuntu uses dash, not bash, as the default shell interpreter. This has been fixed. ================(Build #1453 - Engineering Case #732959)================ A pinpointed status page may have mistakenly reported that the server was not found when IAS-RS-SERVER was not the last parameter in the URL query. This has been corrected. ================(Build #1423 - Engineering Case #730134)================ When a relay error occurred early enough, there may not have been enough information to calculate the relay KPI. The result was a wrong KPI value shown in the Relay Server record. This has been fixed by replacing the wrong value with zero when the KPI cannot be calculated due to a failure. The occurrence of the error is already recorded in the same RSR. ================(Build #1423 - Engineering Case #730094)================ When the Relay Server encountered an invalid SAP Passport, it would have reported an error in English without providing an error code, while continuing to relay the request. This has been fixed by replacing the error with a localized warning RSW104 indicating that an invalid passport has been ignored. ================(Build #1419 - Engineering Case #729894)================ The error ID and error name columns in the Relay Server Record did not capture RSE4008 and RSE4016. This has been fixed. 
================(Build #1419 - Engineering Case #729873)================ The Relay Server automatically sends down an instruction to the client to expire the affinity cookie when the backend server response code falls into the error range, except for 401 and 407 authentication challenges. Debug information about this expiry activity was not available in the Relay Server log at any verbosity level. This fix adds a message at verbosity 4 and above for this activity. ================(Build #1419 - Engineering Case #729871)================ The Relay Server was considering server responses that contained headers with empty values as malformed, and was converting the responses to a '400 Bad Request' response. This fix relaxes this case and relays the response without changing it or raising an error. ================(Build #1419 - Engineering Case #729869)================ The Relay Server converted server responses containing malformed headers into a '400 Bad Request' response without logging an error. A new RSE_CLIENT_RESPONSE_HEADER_ERR(4016) error is now logged when this happens.
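Cases #729871 and #729869 above turn on what counts as a malformed header. As an illustration (not product code), a header line with an empty value is still well-formed; only a line with no colon at all is malformed, which a simple split makes clear:

```python
def parse_header_line(line: str) -> tuple:
    # "X-Debug:" has an empty value but is still a well-formed header
    # and should be relayed unchanged, not rejected as malformed.
    name, sep, value = line.partition(":")
    if not sep:
        raise ValueError("malformed header line: " + line)
    return name.strip(), value.strip()
```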

MobiLink - SA Client

================(Build #2746 - Engineering Case #816881)================ The dbmlsync or dbremote process could have experienced a delay during startup and could have blocked engine processing. This has been fixed. ================(Build #2745 - Engineering Case #816882)================ If you had executed the SYNCHRONIZE START command to pre-start the dbmlsync process running in server mode, and then performed a schema change on the remote database that affected any of the database objects involved in synchronization, it was possible for subsequent SYNCHRONIZE commands to have failed until a SYNCHRONIZE STOP was executed. This has been fixed. ================(Build #2315 - Engineering Case #800505)================ If the MobiLink client (dbmlsync) had been configured to show upload/download row values in the dbmlsync log, it was possible for dbmlsync to have crashed. This has now been fixed. A workaround is to reduce the verbosity of the dbmlsync log. ================(Build #2306 - Engineering Case #800025)================ The MobiLink client utility (dbmlsync) prints the communication parameters used to connect to the MobiLink Server, and this string could have contained passwords in the identity_password, http_password or http_proxy_password parameters. When dbmlsync printed the synchronization profile options, the MobiLink password would also have been printed, even if "-vp" was not specified. These issues have now been fixed. ================(Build #2245 - Engineering Case #790558)================ If a SQL Remote or dbmlsync hook procedure had been owned by dbo, it would not have been found by the log scanning tool, and thus would not have been called during replication or synchronization. This has now been fixed. ================(Build #2214 - Engineering Case #792594)================ If the SQL Anywhere MobiLink Client had to scan a large number of blobs from the transaction log, it could have been slow. 
The performance of the log scanning code when scanning blobs has been improved, although the benefits of this change are highly dependent on the available memory and processor power of the machine, as well as the blobs themselves. ================(Build #2123 - Engineering Case #783355)================ When using dbmlsync through the dbmlsync API, a download file created during one synchronization could have intermittently been deleted during the following synchronization. The following steps led to the deletion: - Run a synchronization using the -bc command line option or the CreateDnldFile option. This synchronization will create a download file. - Immediately after that, run a synchronization with invalid command line options. This synchronization will fail, and when it fails it will delete the download file created in the previous step. This has now been fixed. ================(Build #1704 - Engineering Case #748548)================ If a database contained subscriptions for more than one MobiLink user, and at least one of those users had a name containing non-alphanumeric characters, then it was possible for synchronizations to fail and generate incorrect error messages. The messages generated may have included "There is no synchronization subscription for user ? to publication ?" or "Communication protocol mismatch. Unable to negotiate an appropriate communication protocol with the MobiLink server." Other error messages were likely also possible. This has been fixed; non-alphanumeric characters in MobiLink user names are now handled correctly. ================(Build #1606 - Engineering Case #743027)================ HTTP Basic authentication in persistent HTTP synchronizations could have reported the error: -1305: MobiLink communication error -- code: 216. This has been fixed. Note, this fix also applies to UltraLite and UltraLiteJ for Android. 
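The password handling described for case #800025 above amounts to masking password-carrying parameters before printing the connection string. A sketch under that assumption (the parameter names come from the text; the function and regex are illustrative, not dbmlsync code):

```python
import re

def mask_passwords(params: str) -> str:
    # Replace the values of password-carrying parameters with
    # asterisks so they never appear in the log in plain text.
    return re.sub(
        r"(?i)\b(identity_password|http_password|http_proxy_password)=[^;]*",
        r"\1=***",
        params,
    )
```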
================(Build #1583 - Engineering Case #740879)================ If a SQL Anywhere MobiLink client database had been rebuilt using the Unload utility (dbunload), and it previously had been upgraded using the Upgrade utility (dbupgrad) or the ALTER DATABASE UPGRADE command, then subsequent synchronizations could have resulted in dbmlsync sending up the wrong schema definition to the MobiLink Server, or could have resulted in a crash of the dbmlsync process. This can be worked around by dropping and re-creating all the SYNCHRONIZATION SUBSCRIPTIONS in the remote database after the rebuild or upgrade. This issue has now been resolved. ================(Build #1560 - Engineering Case #740636)================ The MobiLink user password and new password could have been shown in MobiLink server log files in plain text. This would have occurred if the password and new password, as named parameters, were referenced in any user authentication scripts, and the MobiLink server was running with the -vc command line option. This has been corrected. Now the MobiLink server will replace the password and new password with asterisks "*" before logging them. ================(Build #1405 - Engineering Case #728446)================ When the error message "SQL statement failed: (-782) Cannot register 'sybase.asa.dbmlsync' since another exclusive instance is running" was generated and the database character set of the remote database was different from the OS character set, the message would be displayed in the wrong character set and may have been unreadable. This problem affected only this error message, and has now been fixed. ================(Build #1383 - Engineering Case #726952)================ The MobiLink Client would not have reported error messages generated by the MobiLink server for a synchronization where progress offsets were checked against the server values at the beginning of the synchronization and found to be different from the server side values. 
This has been fixed.

MobiLink - Sample

================(Build #1809 - Engineering Case #753866)================ The JRE location used in the SIS_SimpleListener sample was out of date and may have caused the listener to fail actions requiring a JRE if the search path didn't already include a JRE location. The JRE location used in the sample has now been brought up to date.

MobiLink - Streams

================(Build #2322 - Engineering Case #800928)================ There was a potential security vulnerability with MobiLink clients and the Relay Server Outbound Enabler when synchronizing through HTTP proxies. This has been fixed. ================(Build #2170 - Engineering Case #788219)================ It was possible that when a synchronization with HTTP or HTTPS failed, a duplicate HTTP request could have been sent to the server. This would most likely have led to a sync failure, but there was a small chance that it could cause data corruption. This has now been fixed. ================(Build #2133 - Engineering Case #784330)================ If HTTP or HTTPS was being used for synchronization, and a new MobiLink synchronization request was sent to a socket on which a different synchronization had already taken place or on which a synchronization was currently active, the MobiLink Server could have reported an error indicating the ml-session-id had changed, or could have disconnected the active synchronization. This has now been fixed, and the MobiLink Server now allows new HTTP synchronizations to arrive on the same socket as a previous or active synchronization. ================(Build #2132 - Engineering Case #784250)================ A large HTTP or HTTPS synchronization through the Relay Server could have failed with STREAM_ERROR_HTTP_HEADER_PARSE_ERROR (error code 216). This would have occurred when the web server sent back a "204 No Content" response that unexpectedly contained a zero-length chunk in the body. This has been fixed. ================(Build #2130 - Engineering Case #783874)================ Memory was being leaked each time a synchronization was performed using TLS or HTTPS. This has been fixed. ================(Build #1844 - Engineering Case #759243)================ In some situations, it was possible for an HTTPS synchronization to fail, though no actual stream error code would have been reported. This has been fixed. 
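The "204 No Content" fix above (case #784250) follows from the HTTP rule that certain responses carry no message body at all, so even a zero-length chunk is unexpected. A sketch of that rule (illustrative Python, not MobiLink code):

```python
def body_expected(status: int, method: str = "GET") -> bool:
    # Per the HTTP specification, 1xx, 204 and 304 responses, and any
    # response to a HEAD request, must not carry a message body.
    if method == "HEAD":
        return False
    return not (100 <= status < 200 or status in (204, 304))
```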
================(Build #1497 - Engineering Case #734839)================ In rare circumstances, a MobiLink client using HTTP would have ignored bytes sent down by the MobiLink server during a download and requested that the MobiLink server resend them. This has been fixed.

MobiLink - Synchronization Server

================(Build #2770 - Engineering Case #817457)================ If the MobiLink Server had been generating the download stream and a hard shutdown of the MobiLink Server was requested, the download would be aborted, but would not be rolled back. The COMMIT of the end_synchronization transaction would then incorrectly COMMIT any changes that had been made in the download transaction. This has now been fixed, and the download transaction is rolled back when a hard shutdown is requested. ================(Build #2769 - Engineering Case #817456)================ If the MobiLink Server had rejected a number of non-persistent HTTP or HTTPS synchronizations because the number of concurrent active synchronizations exceeded the maximum specified by the -sm switch, it was possible for a rejected synchronization to have remained active in the MobiLink Server, but in a state where the synchronization could not proceed. This could eventually lead to a situation where all active synchronizations allowed in the MobiLink Server would be active, but rejected and unable to proceed, preventing the MobiLink Server from accepting new incoming synchronizations. This has now been fixed. ================(Build #2748 - Engineering Case #816994)================ If user-defined .NET code had been executing in the MobiLink Server to populate the download_cursor or download_delete_cursor, and the value being bound to a particular column had been out of range for the data type, an unhelpful error message would have been printed to the MobiLink log, similar to "[-10225] User exception: Parameter 1 7 is out of range for conversion: SystemException". The error message has been improved and now reads similar to "[-10225] User exception: Parameter for column #2 is out of range for conversion to data type integer: SystemException". 
================(Build #2734 - Engineering Case #816600)================ Several server-side issues with restartable downloads were fixed: 1) There was an undocumented limit of 200 stored downloads; this has been removed. 2) If a download failed to be generated, a restart request for that download would fail with error -10255, "Unable to start the restartable synchronization". The remote will now get the error from the failed download. 3) If the server received a restart request, but the download hadn't yet been generated and the download was larger than the -ds size, the restart request would not be given the download and would instead have hung forever. It will now receive the download. ================(Build #2717 - Engineering Case #816132)================ MobiLink could have failed synchronization unnecessarily when running against IQ with multiple active sessions from the same remote. This has been fixed; the server now waits as usual for pending synchronizations to clear. ================(Build #2703 - Engineering Case #815883)================ Additional logging was added to the -vp switch, and changes were made to the undocumented _log_all=1 stream option. Some output that was printed at level 1 is now printed at level 2, and there is additional logging at level 1. ================(Build #2702 - Engineering Case #815392)================ The MobiLink Server could crash if -wn was greater than 1, and restartable downloads could have been kept longer, or shorter, than they should have been. This has been fixed. ================(Build #2691 - Engineering Case #814979)================ The MobiLink server could crash when using HTTP. This has been fixed. ================(Build #2669 - Engineering Case #814465)================ Additional status check diagnostic logging has been added to the -vp ML server log output. ================(Build #2542 - Engineering Case #810596)================ The MobiLink server could hang. This has been fixed. 
================(Build #2533 - Engineering Case #810246)================ The MobiLink server could have crashed when -wn was greater than 1. This has been fixed. ================(Build #2488 - Engineering Case #808460)================ The MobiLink server was doing an unnecessary network flush during restartable downloads. This has been fixed. ================(Build #2462 - Engineering Case #807001)================ The MobiLink server now sends a stricter set of HTTP cache control headers. This should prevent more HTTP intermediaries from caching MobiLink HTTP requests. ================(Build #2324 - Engineering Case #801033)================ Requests could have failed with an internal stream error when using HTTP. This has been fixed. ================(Build #2260 - Engineering Case #796694)================ The MobiLink server could have crashed when using restartable downloads with the -wn option set to a value greater than 1. This has been fixed. ================(Build #2252 - Engineering Case #796136)================ The MobiLink server could have crashed when using HTTPS with -wn set to a value greater than 1. This has been fixed. ================(Build #2244 - Engineering Case #795422)================ Clients could have crashed the MobiLink server after successfully authenticating. This has been fixed. 
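The exact cache control headers from case #807001 are not listed in this readme; for illustration only, an HTTP service that wants to defeat intermediary caching typically sends response headers along these lines (assumed standard HTTP practice, not taken from the MobiLink server itself):

```
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Pragma: no-cache
Expires: 0
```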
================(Build #2234 - Engineering Case #795574)================ A number of problems with restartable downloads have been fixed: - The sync server could have crashed - The sync server could have reported an error instead of waiting if the sync being restarted had not yet finished - Download restarts were unnecessarily slow - If a remote sent more than one restart request for its download, the last one sent would sometimes fail because the server processed the last one received, which may have been different from the last one sent - It was possible to store more restartable download data than specified with the -ds switch - Failed, restartable syncs waiting for a resumption request would have appeared stuck in the sending download phase of the MobiLink Profiler ================(Build #2234 - Engineering Case #794717)================ There were a number of problems with restartable downloads: - The MobiLink server could have crashed - The MobiLink server could have reported an error instead of waiting if the sync being resumed had not yet finished - Download resumption was unnecessarily slow - If a remote sent more than one restart request for its download, the last one sent would sometimes fail because the server processed the last one received, which may have been different from the last one sent - It was possible to store more resumable download data than specified with the -ds switch - Failed, resumable syncs waiting for a resumption request would have appeared stuck in the sending download phase of the MobiLink Profiler These issues have now been fixed. ================(Build #2213 - Engineering Case #792866)================ The MobiLink server could have crashed. This has been fixed. ================(Build #2212 - Engineering Case #792597)================ A file I/O error during a file transfer upload could have been reported as protocol error 400. This has been fixed. 
================(Build #2178 - Engineering Case #789932)================ UltraLite clients could get into a state where every sync would fail with error -10400: “Invalid sync sequence ID for remote ID”. This has been fixed. ================(Build #2167 - Engineering Case #787658)================ The MobiLink server could have crashed when using HTTP. This has now been fixed. ================(Build #2144 - Engineering Case #785534)================ If an attempt was made to get a bit, tinyint or decimal data type from an IDataReader from the UploadData object, a System.InvalidCastException error would have been thrown. This has now been fixed. ================(Build #2144 - Engineering Case #785533)================ It was possible that when attempting to get a GUID data type from a DBRowReader, a System.FormatException could have been thrown, even though there was no issue with the format of the GUID. This issue has now been fixed. ================(Build #2144 - Engineering Case #785453)================ There were two problems when getting integer values from a DBRowReader in the MobiLink .NET API: - If an attempt was made to get an unsigned smallint, integer or bigint from a DBRowReader, a System.OverflowException would have been thrown if the value was greater than the maximum value for the signed version of the data type. - If an attempt was made to get a tinyint from a DBRowReader, a System.InvalidCastException would have been thrown. Both these issues have been fixed. ================(Build #2144 - Engineering Case #751840)================ If the machine where the MobiLink Server was running had a localized setting such that the decimal separator was not a period (for example, a comma), there were a number of problems when the MobiLink .NET API was used to synchronize data. 
- Attempting to get a decimal data type from an IDataReader from the UploadData object could have resulted in a System.FormatException error - Attempting to get a real, double or decimal data type from a DBRowReader could have resulted in a System.FormatException error - Attempting to use a real or double data type in a DBParameter added to a DBCommand could have resulted in an error indicating that the value could not be converted to a real or double These problems have now been fixed. ================(Build #2090 - Engineering Case #772417)================ The MobiLink server could have crashed. This has been fixed. ================(Build #2019 - Engineering Case #771541)================ Attempting to have the MobiLink server bind a virtual IP address while specifying the “host” stream parameter would have resulted in error -10259 “Network address '<host>' is not local” on some platforms. This has been corrected so that the server will print warning 10126 “'<host>' might not be a local address” instead. ================(Build #2017 - Engineering Case #771300)================ Synchronizations over HTTP could have failed if the command line option -wn was greater than one. This has been fixed. ================(Build #2015 - Engineering Case #769672)================ The MobiLink server log files created on Unix systems did not give read permission to anyone except the user who created the files. This has now been corrected: the MobiLink server log files will have read permission set for the group and other users as well. Note that this change applies to dbremote and dbmlsync logs as well. ================(Build #1994 - Engineering Case #768379)================ The MobiLink server with integrated Outbound Enabler could have crashed. This has been fixed. 
================(Build #1932 - Engineering Case #764027)================ When doing a ping operation using the MobiLink client (dbmlsync), the MobiLink server would have reported the following error: [-10410] The client failed to send a complete sequence of commands Ping request failed Other than the error message, the ping operation would have behaved correctly. This has been fixed so that the error is no longer issued. ================(Build #1901 - Engineering Case #762941)================ Numeric data from columns with a FLOAT(precision) type used in a remote database might not have been uploaded into a HANA consolidated database correctly when the precision was less than 25 and the HANA server revision number was greater than 66. This problem has now been fixed. A workaround for this problem is to start the MobiLink server with the hidden option -hwg-. ================(Build #1789 - Engineering Case #754972)================ Under very rare circumstances, the server may have crashed while sending describe queries from Java applications. This has now been corrected. ================(Build #1727 - Engineering Case #750713)================ When trying to start an evaluation edition of the MobiLink server, it would fail with the following error: [-10382] The MobiLink Server has failed to start This problem has now been fixed. ================(Build #1712 - Engineering Case #749407)================ If the connection between the MobiLink Notifier and the consolidated database had been disconnected, it was possible for the MobiLink Notifier to have failed to re-connect to the consolidated database. This has now been fixed. A workaround for the issue is to restart the MobiLink Server. ================(Build #1699 - Engineering Case #748725)================ When MobiLink uploaded a UUID (GUID) to SAP HANA, hyphens would have been added to the resulting string, resulting in a VARCHAR(40) field instead of a VARCHAR(36). 
This change removes the hyphens from UUIDs so that UUID columns can now be stored as a VARCHAR(36) in SAP HANA, which matches the SAP HANA default UUID format. ================(Build #1691 - Engineering Case #748531)================ Synchronizations could have spuriously failed if the MobiLink server was configured to use end-to-end encryption, but the client was not. This has been fixed. ================(Build #1674 - Engineering Case #747231)================ The default query to fetch the next 'last download' timestamp was slow when using a HANA consolidated database. This has been fixed. The server now caches the results of this query, so its value can be up to 8 seconds out of date. This can result in extra rows being downloaded. ================(Build #1667 - Engineering Case #747227)================ The MobiLink Server leaked memory when using secure streams on Mac OS X systems. This has now been fixed. ================(Build #1656 - Engineering Case #745198)================ The ODBC driver could have exhibited inconsistent behavior when calling a stored procedure with blob parameters. The problem only occurred with data-at-execution-time blob parameters: blobs of type varchar worked correctly, but blobs of type binary did not. The problem has now been fixed. ================(Build #1646 - Engineering Case #745646)================ Downloads could have been incorrectly skipped in persistent connections. This has been fixed. ================(Build #1607 - Engineering Case #742161)================ The conditions for a restartable download to be available have been improved. This currently applies only to UltraLite clients that have been upgraded to build 1584. IMPORTANT: These newer UltraLite clients may in rare cases cause older 16.0 MobiLink servers (prior to build 1584) to crash due to a previously undetected bug. 
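The timestamp caching described in case #747231 is essentially a time-to-live cache: the expensive query result is reused for up to 8 seconds. A minimal illustrative sketch (not the MobiLink server's actual implementation; all names here are hypothetical):

```python
import time

class TtlCache:
    """Cache a single fetched value for up to ttl seconds, so the
    cached value may be up to ttl seconds out of date (as with the
    'next last download' timestamp in case #747231)."""

    def __init__(self, fetch, ttl=8.0, clock=time.monotonic):
        self._fetch = fetch        # the expensive query
        self._ttl = ttl            # maximum staleness in seconds
        self._clock = clock        # injectable clock for testing
        self._value = None
        self._fetched_at = None

    def get(self):
        now = self._clock()
        if self._fetched_at is None or now - self._fetched_at >= self._ttl:
            self._value = self._fetch()   # refresh from the database
            self._fetched_at = now
        return self._value
```

The trade-off is exactly the one the readme notes: fewer round trips against the consolidated database, at the cost of a value that can be slightly stale, which can cause extra rows to be downloaded.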
================(Build #1514 - Engineering Case #737614)================ If a native thread in the Java VM launched by the MobiLink Server printed to stdout, the output was redirected to the MobiLink log file. If the MobiLink Server was ready to accept synchronizations when the Java VM attempted to print to stdout, it was possible for the Java VM to crash after printing the string, which would also have crashed the MobiLink Server. A workaround for this issue is to start the Java VM with the -XtraceFile option, which redirects the Java VM stdout to a file instead of the MobiLink log file. The issue has now been fixed, and the way the Java VM stdout strings are written to the MobiLink log has changed: instead of being posted as errors with error number -10133, the output is now informational and has "(JVM): " at the start of the string to identify its source. ================(Build #1498 - Engineering Case #736044)================ The MobiLink server may have crashed when a MobiLink client connected over HTTPS through a proxy. Although the likelihood of the crash was extremely low, it has been corrected. ================(Build #1479 - Engineering Case #734077)================ Synchronizations with large downloads could have been slowed by up to the liveness timeout when using HTTP, if a network interruption occurred. This has been fixed. ================(Build #1432 - Engineering Case #731014)================ If the MobiLink server had been started with a command line in which the maximum number of concurrent database worker threads (-wm option) was less than the initial number of concurrent database worker threads (-w option, default value 5), then the MobiLink Server would have failed to start. 
The MobiLink Server will now print a warning to the MobiLink Server log indicating that it has reduced the initial number of concurrent database worker threads to the maximum specified on the command line. ================(Build #1423 - Engineering Case #730271)================ Syncs could have failed when using HTTP with an HTTP intermediary that set a “Connection: Keep-alive” header but actually created a new connection for each HTTP request. This has now been corrected. ================(Build #1423 - Engineering Case #729427)================ The MobiLink server could have crashed when using HTTP and a misconfigured HTTP proxy. The server now reports an error and kills the synchronization when this occurs.
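The -w/-wm reconciliation from case #731014 above can be sketched as follows (an illustrative sketch, not the server's actual code; the function and message text are hypothetical):

```python
def resolve_worker_counts(initial=5, maximum=None):
    """Reconcile the initial (-w, default 5) and maximum (-wm) database
    worker thread counts: instead of refusing to start when the initial
    count exceeds the maximum, clamp it and return a warning message."""
    warning = None
    if maximum is not None and initial > maximum:
        warning = ("initial worker threads reduced from %d to the "
                   "maximum of %d" % (initial, maximum))
        initial = maximum
    return initial, warning
```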

MobiLink - Utilities

================(Build #2193 - Engineering Case #790656)================ An MLReplay memory overwrite has been fixed. ================(Build #1813 - Engineering Case #756745)================ Simulating the time in between synchronizations when replaying a persistent connection could have caused the synchronization to time out. This has been fixed.

MobiLink - iAS Branded ODBC Drivers

================(Build #2222 - Engineering Case #793794)================ In some circumstances, retrieving a query result set from an Oracle database through the SQLA ODBC driver could have been slow, especially for tables with a small row width, because the ODBC driver fetched only 20 rows from the database server each time. To make the fetch size configurable, a new DSN configuration parameter, “Fetch array size (rows)”, has been introduced. This parameter can be set from the “Configuration for SQL Anywhere driver for Oracle” dialog box on Windows, or using the new DSN entry FetchArraySize=xxx on UNIX. The default value is 20; the default is used if the parameter is not specified or is set to zero. Increasing the “Fetch array size” reduces the number of round trips on the network, thereby increasing performance. For example, if your application normally fetches 100 rows, it is more efficient for the driver to fetch 100 rows at one time over the network than to fetch 20 rows at a time during five round trips. However, increasing the “Fetch array size” will also increase the memory usage of the ODBC driver. ================(Build #2181 - Engineering Case #789321)================ The output data from stored procedure calls could have been truncated by the SQL Anywhere ODBC driver for Oracle, if the SQL_C_WCHAR data type was used when binding the INPUT_OUTPUT or OUTPUT parameters, and the Oracle OCI library, version 12.1.0.2.0, was used. This problem has now been fixed.
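On UNIX, the FetchArraySize entry from case #793794 is set in the DSN definition. A minimal illustrative fragment (the DSN name and the placeholder comments are hypothetical, not taken from this readme; only FetchArraySize is the documented new entry):

```ini
[my_oracle_dsn]
; ...usual driver and connection entries for the SQL Anywhere
; driver for Oracle go here...
; Fetch 100 rows per network round trip instead of the default 20
; (omitting the entry, or setting it to 0, gives the default of 20):
FetchArraySize=100
```

Larger values trade ODBC driver memory for fewer network round trips, as described above.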

MobiLink - scripts

================(Build #1725 - Engineering Case #749081)================ The HANA server would have complained with the following error: "unsupported function included: CURRENT_TIMESTAMP is not supported by generated always" when trying to run the MobiLink server setup script file, synchana.sql against a database running on a HANA server, if the HANA server revision was 1.00.64 or later. This problem has now been fixed.

SQL Anywhere - ADO.Net Managed Provider

================(Build #2642 - Engineering Case #813352)================ Using the SQL Anywhere .NET Data Provider SetupVSPackage installer with the options "/install /v ef6" silently installed the version 4.5 provider instead of the EF6 provider, because a case-sensitive comparison was done for the string "EF6" (so "/v EF6" would work). This problem has been fixed; the comparison is now case-insensitive. Also, the installer will now list the modifications it makes to the machine.config files when an "install" option is used. ================(Build #2636 - Engineering Case #812887)================ When using the SQL Anywhere .NET Data Provider with Entity Framework, an error was reported when trying to use certain canonical functions. Generally, the error occurred because the function was not implemented; in other cases, the implementation was incorrect. The following functions have been corrected, added, or removed:
- Round() can now take a second argument, which is the number of precision digits.
- Truncate() can now take a second argument, which is the number of precision digits. The use of Truncate() caused a syntax error and has been reimplemented so that it generates a call to the TRUNCNUM system procedure.
- Abs() has been added.
- Contains(), StartsWith(), and EndsWith() have been added.
- Millisecond() has been added.
- DayOfYear() has been added.
- CurrentDateTime(), CurrentUtcDateTime(), and CurrentDateTimeOffset() have been added.
- GetTotalOffsetMinutes() has been added.
- TruncateTime() has been added.
- CreateDateTime(), CreateDateTimeOffset(), and CreateTime() have been added.
- AddNanoseconds() and DiffNanoseconds() have been removed since they are not supported.
The datepart keywords have been revised to those supported by the database server: dateparts caldayofweek, cdw, calweekofyear, cwk, calyearofweek, cyr and microsecond, mcs, us, tzoffset, tz are now supported; dateparts d, m, n, q, s, ww, y, yyyy have been removed as they are not supported. 
These changes improve Entity Framework support. Here is an example that uses some of these functions:

var dataset2 = query
    .OrderBy(y => y.Name)
    .Select(y => new {
        B_Name = y.Name,
        B_ID = y.BlogId,
        B_Url = y.Url,
        B_Date = y.CreatedDate,
        B_P1 = y.Name.StartsWith("C"),
        B_P2 = y.Name.EndsWith("1"),
        B_P3 = y.Name.Contains("d")
    })
    .ToList();

A work-around may be possible in some circumstances by implementing a function stored procedure of the same name. Here is an example for CreateDateTimeOffset:

CREATE OR REPLACE FUNCTION dbo.CreateDateTimeOffset(yy int, mm int, dd int, hh int, nn int, ss double, tzo int)
RETURNS DATETIMEOFFSET
BEGIN
    RETURN TODATETIMEOFFSET(DATEADD(microsecond,ss*1000000,DATEADD(second,3600*hh+60*nn,YMD(yy,mm,dd))),tzo);
END;
GRANT EXECUTE ON dbo.CreateDateTimeOffset to PUBLIC;

================(Build #2632 - Engineering Case #812885)================ When using the SQL Anywhere .NET Data Provider with Entity Framework, an error occurred in data query expressions using TimeSpan values. The following is an example containing a "where" clause involving a TimeSpan and a TIME database data type:

var query = from b in db.Blogs
            where System.Data.Entity.DbFunctions.CreateTime(12, 34, 56.789) == b.ts
            orderby b.Name, b.BlogId
            select b;

This problem has been fixed. ================(Build #2618 - Engineering Case #812382)================ The unmanaged code portion of the SQL Anywhere ADO.NET provider is contained in a DLL that is unpacked by the provider into a directory and subsequently loaded from there into memory. In some situations, this action violates system security policies. To accommodate this, the load procedure for the unmanaged code DLL (dbdata17.dll or dbdata16.dll) has been changed as follows: 1. The provider looks for the dbdata DLL in the .NET application's directory. If the DLL is found, then it is loaded and a version check is done. If the DLL version matches the ADO.NET provider version, then the application is launched. 
Otherwise, the next step is performed. 2. The provider looks for the dbdata DLL in the ADO.NET provider's directory (this directory could be different from the application directory). If the DLL is found, then it is loaded and a version check is done. If the DLL version matches the ADO.NET provider version, then the application is launched. Otherwise, the next step is performed. 3. The provider looks for the dbdata DLL in the "temp" directory as described in the documentation. It starts with the directory at index 1 (for example, {16AA8FB8-4A98-4757-B7A5-0FF22C0A6E33}_1708.x64_1). If the DLL is found, then it is loaded and a version check is done. If the DLL version matches the ADO.NET provider version, then the application is launched. Otherwise, if the DLL was found but the version was wrong, an attempt is made to delete it. If this succeeds, then a new DLL is unpacked into the directory. Otherwise, the next directory (index 2, 3, etc.) is searched, repeating step 3. See http://dcx.sap.com/index.html#sqla170/en/html/3bcf66b76c5f1014b219867750fa0899.html for more information on how the dbdata DLL is handled. Step 3 is very similar to the previous behavior of the ADO.NET provider, except that the provider will load the DLL and do a version check if the DLL is already present and then attempt to delete it if the version is wrong. Previously the provider would attempt to delete the DLL first and, if not successful, load it and do a version check. In most situations, this should help improve performance. Note that if the provider DLL is in the global assembly cache (GAC), then no dbdata DLL will be found there. Typically, the provider DLL will be located with the application executable. Ultimately, your application will decide how the provider is loaded if not through the GAC. Placement of the dbdata DLL as described in step 1 is preferable to that of step 2. 
It will be the ADO.NET application developer’s responsibility to make a copy of the dbdata DLL during the development/test phase from the "temp" directory and place it in one of the directories described in step 1 or 2. The developer must ensure correct bitness (32/64 bit) and version match (for example, 17.0.8.4103) between the provider and the dbdata DLL in order for step 1 or 2 to work. ================(Build #2593 - Engineering Case #811732)================ In a multithreaded ADO.NET application, communicating with a slow-to-respond database server on one thread could impact the performance of threads that were communicating with quick-to-respond database servers. For example, if a database server required 1 minute to respond to a connection request, then all other threads were delayed by 1 minute. The SQL Anywhere .NET Data Provider has been revised to remove this serialization. ================(Build #2412 - Engineering Case #804585)================ A pooled connection can be invalidated by the database server for a number of reasons, including user creation, user deletion, password changes, connection timeout, etc. When the database server invalidates a pooled connection, the SQL Anywhere .NET Data Provider discards the pooled connection and creates a new connection. For multithreaded applications, the provider might have given the same new connection to two different threads that were opening a connection. Eventually each thread closed the connection, returning it to the pool, and an assertion was generated by the server for the second thread (since the connection was already pooled). When this problem occurred, the database server returned the error "Assertion failed 104909 Invalid request on pooled connection" to the application. This problem has been fixed. 
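The three-step dbdata DLL search described in case #812382 above can be sketched as follows (an illustrative sketch, not the provider's actual code; the function names are hypothetical, and the delete-and-repack handling of stale DLLs in step 3 is omitted):

```python
import os

PROVIDER_VERSION = "17.0.8.4103"   # example version from the text above

def find_dbdata_dll(app_dir, provider_dir, temp_dirs, dll_version_of):
    """Search for a version-matched dbdata DLL.

    dll_version_of(path) stands in for loading the DLL and reading its
    version: it returns the version string, or None if the file is absent.
    """
    # Steps 1 and 2: the application directory, then the provider directory.
    for d in (app_dir, provider_dir):
        path = os.path.join(d, "dbdata17.dll")
        if dll_version_of(path) == PROVIDER_VERSION:
            return path
    # Step 3: the indexed "temp" directories, starting at index 1.
    # (A found-but-wrong-version DLL would be deleted and replaced here.)
    for d in temp_dirs:
        path = os.path.join(d, "dbdata17.dll")
        if dll_version_of(path) == PROVIDER_VERSION:
            return path
    return None
```

Placing a correctly versioned copy next to the application (step 1) short-circuits the search, which is why the readme recommends it.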
================(Build #2329 - Engineering Case #801280)================ A .NET application could have received a NullReferenceException when calling ClearAllPools or ClearPool in a multithreaded application. Also, a .NET application could have gone into an infinite loop if the database server was shut down while the .NET application was executing SQL statements. These problems have been fixed. ================(Build #2318 - Engineering Case #800698)================ Some of the SQL Anywhere .NET Data Provider Database (class) and DbProviderServices (class) methods may have failed if the underlying Table property was null. The methods that may have failed include Database.Exists, Database.Delete, Database.Create, Database.CreateIfNotExists, DbDatabaseExists, DbDeleteDatabase, DbCreateDatabase, and DbCreateDatabaseScript. These Database methods are used in Entity Framework applications. This problem has been fixed. The following example code fragment illustrates the use of some of these methods:

using (var db = new BloggingContext())
{
    Console.WriteLine("Delete the old database");
    db.Database.Delete();
}
using (var db = new BloggingContext())
{
    Console.WriteLine("Create a new database");
    db.Database.Create();
    if (db.Database.Exists())
    {
        Console.WriteLine("The database does exist");
    }
    else
    {
        Console.WriteLine("The database does not exist");
    }
}

================(Build #2317 - Engineering Case #800621)================ If a .NET connection had been pooled and was currently closed, and the database server was terminated while the pooled connection was closed, then an attempt to open the pooled connection would have resulted in an infinite loop. This has now been fixed. 
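The infinite loop in case #800621 is the characteristic failure of retrying a stale pooled connection without a bound. A minimal sketch of the safer shape (hypothetical names, not the provider's code): retry a bounded number of times with a fresh connection, then surface the error instead of looping forever.

```python
def open_with_pool_retry(try_open, max_retries=1):
    """Attempt to open a connection, retrying a bounded number of times.

    try_open() stands in for opening a (possibly pooled) connection; it
    raises ConnectionError if the pooled handle turns out to be stale.
    After max_retries fresh attempts, the error is raised to the caller
    rather than retried indefinitely.
    """
    last_error = None
    for _attempt in range(max_retries + 1):
        try:
            return try_open()
        except ConnectionError as exc:
            last_error = exc   # pooled handle was stale; try a fresh one
    raise last_error
```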
Note: the problem was introduced by the changes for Engineering case 793308 - "Slow performance of ADO.NET connection pooling". ================(Build #2266 - Engineering Case #797203)================ When using the SQL Anywhere .NET provider, it was possible to get an exception when closing a pooled connection. The exception error text was “Invalid user ID or password”. This exception could have occurred for any condition where a connection was not returned to a pool, for example, when a password had changed. The complete list of conditions for which a connection is not pooled is described at http://dcx.sap.com/index.html#sqla170/en/html/814d6d5c6ce2101482c9b5abd7938330.html. This problem has been fixed. The .NET application will no longer see the exception, and the connection is closed but not pooled. ================(Build #2265 - Engineering Case #797124)================ When using the SQL Anywhere .NET provider, it was possible to get a NullReferenceException when calling ClearAllPools. This exception could have occurred in multithreaded .NET applications that open or close pooled connections while another thread calls ClearAllPools. This problem has been fixed. The .NET application will no longer see the exception. ================(Build #2222 - Engineering Case #793308)================ The performance of the ADO.NET connection pool was slow compared to the .NET ODBC bridge. Several changes have now been made to improve performance. ================(Build #2218 - Engineering Case #793189)================ When attempting to call a stored procedure with many long-named parameters, an error could have been returned indicating that parameters were mismatched. 
For example, when attempting to call a stored procedure with 99 very long parameter names:

myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_2", 10);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_1", 5);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_55", 550);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_54", "string");
. . .
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_98", 980);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_97", 970);
myCommand.Parameters.AddWithValue("@param_with_a_very_long_name_99", 990);
SADataReader myDataReader = myCommand.ExecuteReader();

the SQL Anywhere .NET provider should have matched parameter names with actual parameter names, so order should not have mattered. The provider was not setting aside enough memory for the parameter name lookup, resulting in matching by order rather than name. This problem has been fixed. ================(Build #2185 - Engineering Case #785764)================ When using the .NET GetSchemaTable() method for a query on a table whose name was not unique, an exception could have occurred in the provider. This problem has been fixed. For example, suppose the following query was executed against the table “employees” owned by DBA, and there also exists a table “Employees” owned by the user GROUPO:

SACommand cmd = new SACommand("SELECT * FROM DBA.employees", conn);
SADataReader reader = cmd.ExecuteReader();
DataTable schema = reader.GetSchemaTable();

An exception was raised in the GetSchemaTable call. When the tables have the same letter case, an exception would not have occurred, but the wrong schema information could have been returned. ================(Build #2179 - Engineering Case #787963)================ The .NET Data Provider would have generated an exception when attempting to connect to a database server that had more than two digits in the minor version. 
For example, the provider would have generated System.ArgumentOutOfRangeException parsing the following version string: SAP IQ/16.0.101.1215/20034/P/sp10.01/… This problem has been fixed. The normalized version string that is returned by the ServerVersion property now has the following format:

##.##.###.####
^  ^  ^   ^
|  |  |   |
|  |  |   Build Number
|  |  Minor Version
|  Major Version
Release Version

This new format is also used in the DataSourceInformation collection (DataSourceProductVersion and DataSourceProductVersionNormalized). ================(Build #2162 - Engineering Case #787422)================ The changes for Engineering case 766113 caused the .NET Data Provider to attempt to set the CHAINED option to ON when connecting to the utility database. This resulted in the error “Permission denied: you do not have permission to execute a statement of this type” when connecting to the utility database, due to this option being disallowed for the utility database. This problem has now been fixed. ================(Build #2090 - Engineering Case #776249)================ A .NET application could have crashed when calling the SACommand.Cancel method. This has been corrected. ================(Build #2064 - Engineering Case #776895)================ In the Entity Framework Data Model Wizard “Choose Your Database Objects and Settings” dialog, the SQL Anywhere .NET Data Provider did not return stored procedure and function names in the “Stored Procedures and Functions” list view for a case-sensitive database. This problem has been corrected. ================(Build #2042 - Engineering Case #773894)================ When an ADO.NET application enlisted with a transaction coordinator, a constant GUID was supplied by the SQL Anywhere ADO.NET Data Provider. This prevented additional .NET applications, running on the same system, from doing the same. The following error message may have been seen: "A resource manager with the same identifier is already registered with the specified transaction coordinator. (Exception from HRESULT: 0x8004D102)" This problem has now been fixed so that instead of a constant GUID, a new one is generated each time enlistment occurs. ================(Build #2020 - Engineering Case #771708)================ When using the SQL Anywhere .NET Data Provider, if the BulkCopyTimeout property was set to 0, an exception would have occurred during a call to WriteToServer. This has been fixed. The value 0 means that there is no timeout. ================(Build #2014 - Engineering Case #770760)================ Visual Studio 2012/2013 would have failed to generate an Entity Framework 6 data model using a SQL Anywhere database. This has been corrected. The steps for generating Entity Framework 6 data models (Entity Framework 6 Tools for Visual Studio 2012 & 2013 should be installed): - Run SetupVSPackage.exe with the “/v 6” option to register the Entity Framework 6 provider. - Install the Entity Framework NuGet package for the Visual Studio project. - Modify the app.config file to add the Entity Framework 6 provider. Here is an example:

<providers>
    <provider invariantName="iAnywhere.Data.SQLAnywhere"
              type="iAnywhere.Data.SQLAnywhere.SAProviderServices, iAnywhere.Data.SQLAnywhere.EF6, Version=16.0.0.20144, Culture=neutral, PublicKeyToken=f222fc4333e0d400" />
</providers>

- Build the Visual Studio project. - Run the Entity Data Model Wizard. ================(Build #1987 - Engineering Case #768174)================ When using the SQL Anywhere .NET Data Provider, decimal values would have been displayed with trailing zeroes removed. For example, instead of 5.1000, it would have been displayed as 5.1. This was an unintentional change in behavior which has now been corrected. ================(Build #1965 - Engineering Case #766113)================ The ADO.NET provider would not have been able to roll back a transaction if the CHAINED option was OFF. 
This has been fixed by setting the CHAINED option to ON after opening a database connection. ================(Build #1962 - Engineering Case #766511)================ In Visual Studio, incorrect major version information for the SQL Anywhere plugin may have been shown in the "Choose Data Source" and "Add Connection" dialogs on non-English systems. This has been corrected. ================(Build #1954 - Engineering Case #766115)================ Visual Studio 2013 integration was not supported. SetupVSPackage has now been modified to create registry keys for Visual Studio 2013. ================(Build #1946 - Engineering Case #765334)================ In a .NET application, a buffer overrun could have resulted when using an SADataReader to get a very long column value. Example: SELECT SPACE(1147483643) FROM dummy The possible buffer overrun has been corrected. ================(Build #1944 - Engineering Case #765332)================ If the alias name in a SQL statement was longer than 128 characters, a SQL Anywhere .NET data provider client application could have crashed. Example: SELECT 'string' AS "alias...name" FROM dummy Similarly, if a SQL column expression contained more than 128 characters, a .NET client application could have crashed. Example: SELECT 1+2+3+...+1000 FROM dummy These problems have been fixed. Alias and expression names are now restricted to at most 128 characters by the SQL Anywhere .NET data provider. As a work-around for the first case, restrict the length of alias names to at most 128 characters. Example: SELECT 'string' AS expr FROM dummy As a work-around for the second case, use an alias name of at most 128 characters. Example: SELECT 1+2+3+...+1000 AS expr FROM dummy ================(Build #1868 - Engineering Case #760873)================ Named parameter lookup performed poorly. The GetInputParameterValues method has now been rewritten to improve the speed of named parameter lookup.
================(Build #1857 - Engineering Case #759830)================ An application using the ADO.NET Data Provider could have failed with the error “Unable to load DLL ‘dbdata16.dll’” when calling SAConnectionStringBuilder.ToString. This has now been corrected. ================(Build #1831 - Engineering Case #758073)================ The Visual Studio 2010 compiler could have crashed when generating Entity Data Models. This has now been fixed. ================(Build #1761 - Engineering Case #746767)================ In a .NET application, it was possible to store a decimal number into a NUMERIC/DECIMAL table column when the precision of the decimal number exceeded the precision of the NUMERIC/DECIMAL column by 1. Also, when the stated precision of a decimal value parameter was much less than the actual precision of the decimal value, it was possible to corrupt the heap. For example: parm.Precision = 5; parm.Value = (decimal) 123456789; These problems have now been fixed. ================(Build #1755 - Engineering Case #751207)================ Using SACommandBuilder.DeriveParameters with a stored procedure or function that contained long data values as input or output parameters would have reported the size as 32767 bytes. This has been fixed; a size of 0 bytes (to use the maximum size of the long data value during binding) is now reported instead. ================(Build #1745 - Engineering Case #751588)================ The ASP.NET provider's database configuration tool, SetupAspNet, would have failed with a syntax error message. This problem has been fixed. ================(Build #1737 - Engineering Case #750915)================ The SABulkCopy class would have thrown an exception when using SQL Server as the data source. This has now been corrected. ================(Build #1717 - Engineering Case #750008)================ In a multithreaded ADO.NET application, there was a possibility for process execution to hang, or for an exception to occur.
Three problems were identified and corrected. ================(Build #1709 - Engineering Case #749295)================ The Dispose function of SATransaction did not automatically roll back the transaction. This has now been corrected. ================(Build #1683 - Engineering Case #747334)================ A client application could have hung when opening a pooled connection following a failed connection. This has now been fixed. ================(Build #1681 - Engineering Case #747308)================ The Entity Framework Import method would have failed to find procedures if the procedures had comments in front of ‘ALTER PROCEDURE’. This has now been corrected. ================(Build #1658 - Engineering Case #746082)================ When an application using ADO.NET had disconnected all of its connections so that they were pooled, an autostarted database and/or server could have autostopped, causing the server to need to be autostarted again if the same application connected again. This has been fixed so that in most cases, the presence of a pooled ADO.NET connection will prevent the server from autostopping. Note that in cases where pooled connections cannot be reused (for example, connections using an integrated login, or where the user's password was changed), the database and/or server may still autostop even with this fix. ================(Build #1612 - Engineering Case #743743)================ When an ADO.NET connection was closed and returned to the connection pool by the SQL Anywhere .NET Data Provider, the connection name was cleared. However, when the pooled connection was reclaimed from the pool, the ConnectionName was not restored. This problem has now been corrected. ================(Build #1607 - Engineering Case #739247)================ When a new SAConnectionStringBuilder object was created and used by each SAConnection object in a loop, performance was slow. Performance has now been improved.
================(Build #1605 - Engineering Case #743048)================ Opening pooled connections was taking longer than necessary. Open performance of pooled connections has now been improved by caching and reusing some internal values. ================(Build #1602 - Engineering Case #742857)================ When reading Long Varchar or Long Binary columns using SADataReader, the results could have been truncated to 65535 characters. This has now been fixed. ================(Build #1594 - Engineering Case #742543)================ When iterating through the parameters of a SACommand using “foreach (SAParameter param in command.Parameters)”, the first iteration would have worked, but subsequent iterations would not have had the parameters. This has now been corrected. ================(Build #1588 - Engineering Case #741707)================ An access violation exception in the ADO.NET provider could have caused the database server to crash. This has been fixed. ================(Build #1581 - Engineering Case #741704)================ The changes for Engineering case 735654 were incomplete. Using an 11.0.1 database with a 12.0.1 or 16.0.0 .NET provider and server could still have resulted in the exception "Invalid option 'timestamp_with_time_zone_format' -- no PUBLIC setting exists". This has now been corrected. ================(Build #1580 - Engineering Case #741721)================ Calling the SAConnection.ConnectionString property could have caused the provider to crash with a NullReferenceException. This has now been fixed. ================(Build #1570 - Engineering Case #740808)================ A multithreaded .NET application could have failed with an access violation exception. This has been fixed by modifying the thread synchronization code for some interface functions, as well as the managed connection pooling code.
================(Build #1566 - Engineering Case #740695)================ The ADO.NET provider could have thrown an exception when closing a pooled connection which used an integrated login. This has now been fixed. ================(Build #1540 - Engineering Case #738415)================ Use of connection pooling could have caused web applications to hang. This has been corrected. ================(Build #1528 - Engineering Case #738381)================ Changes have been made to improve connection pooling performance. ================(Build #1524 - Engineering Case #738144)================ In rare circumstances, an SAConnection object could have thrown a NullReferenceException when the ConnectionString property was accessed. This has now been fixed. ================(Build #1523 - Engineering Case #738143)================ In rare circumstances, the ADO.NET Provider could have thrown an AccessViolationException when reading a DataSet. This has now been fixed. ================(Build #1508 - Engineering Case #737191)================ The MSSqlToSA.xml mapping file is used by the SQL Server Import and Export Data wizard (DTSWizard). This mapping file has now been improved in the following ways: - Other SQL Server clients (SQLOLEDB, SQLNCLI, SQLNCLI10) are now included in the list of possible data sources, in addition to the existing SQL Server .NET provider. - "datetimeoffset" is now mapped to "timestamp with time zone" instead of "datetimeoffset", because the Microsoft DTSWizard would append "(0)" to "datetimeoffset", which caused a syntax error when the CREATE TABLE statement was executed against a SQL Anywhere / Sybase IQ server. For example, "CREATE TABLE Temp (dtocol datetimeoffset(0) )" is invalid syntax. - “float” is now mapped to “double”, instead of “float”. This causes the Microsoft DTSWizard to use "real" for small precision float types, and "double" for large precision float types.
When “float” was used, Microsoft did not add the precision specification (for example, float_col float(53)). These improvements apply to the use of the SQL Anywhere .NET or the SQL Anywhere OLE DB providers, in combination with a number of SQL Server providers, when migrating tables from SQL Server to SQL Anywhere/Sybase IQ. ================(Build #1492 - Engineering Case #735923)================ Calling SAConnection.Open would have thrown an exception when attempting to open the 'utility_db' database. ================(Build #1492 - Engineering Case #735807)================ Closing a pooled connection could have been blocked when the request was from a multi-threaded application. This has been fixed. ================(Build #1491 - Engineering Case #735815)================ Calling the SAConnection.Close method would have thrown an exception when closing pooled version 10.0 and version 11.0 database connections. ================(Build #1491 - Engineering Case #735654)================ The SAConnection.Open method would have thrown an exception when opening a version 10.0 or 11.0 database connection using the version 12.0 or 16.0 provider. This has now been corrected. ================(Build #1485 - Engineering Case #735130)================ Using the Entity Framework in an ASP.NET MVC application could have caused a NullReferenceException. The provider was not checking whether Type.FullName was null before calling the method Type.FullName.StartsWith. This has been corrected. ================(Build #1485 - Engineering Case #735124)================ Calling the Entity Framework function CurrentDateTimeOffset would have resulted in a 'procedure not found' server error. This has now been corrected. ================(Build #1443 - Engineering Case #731461)================ The ADO.NET provider did not convert 'timestamp with time zone' values correctly when the regional date settings of the client did not match the date settings of the database.
The provider will now return .NET DateTimeOffset values to the client. The client can then convert the .NET DateTimeOffset values to a desired format. ================(Build #1431 - Engineering Case #730642)================ If multiple threads attempted to access a connection pool concurrently (by modifying it to add/remove a connection), an InvalidOperationException would have been thrown. This has been corrected. ================(Build #1410 - Engineering Case #728589)================ Calling Entity Framework SaveChanges could have caused a NullReferenceException if the entity model had properties with “fixed” concurrency mode. This has now been fixed. ================(Build #1405 - Engineering Case #728335)================ When setting SAParameter.DbType to DbType.DateTime2, an IndexOutOfRangeException could have been thrown. The data type conversion was missing for DbType.DateTime2. This has now been corrected.
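To make the normalized ServerVersion format described at the top of this section concrete, the four numeric components can be pulled out of a raw server version string with a short regular expression. The following is only an illustrative sketch in Java (it is not the provider's implementation; the class and method names are invented for this example):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionNormalizer {
    // Matches a four-part numeric version such as 16.0.101.1215
    // embedded anywhere in a raw server version string.
    private static final Pattern VERSION =
        Pattern.compile("(\\d+)\\.(\\d+)\\.(\\d+)\\.(\\d+)");

    // Returns the ##.##.###.#### portion, or null if no such
    // four-part version appears in the input.
    public static String normalize(String raw) {
        Matcher m = VERSION.matcher(raw);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        // Raw string modeled on the readme's example (trailing fields trimmed).
        System.out.println(normalize("SAP IQ/16.0.101.1215/20034/P/sp10.01"));
        // prints 16.0.101.1215
    }
}
```

An application that must work against both old and new provider builds could apply a similar extraction to whatever ServerVersion returns, rather than assuming one format.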

SQL Anywhere - DBLIB Client Library

================(Build #2627 - Engineering Case #801193)================ 32-bit client applications running on SPARC systems would have crashed when connecting to the server. This has been fixed. ================(Build #2266 - Engineering Case #796899)================ The Embedded SQL function sqlda_string_length would have returned inconsistent results for some types in certain situations. If a column in a query was described as DT_DATE, DT_TIME, DT_TIMESTAMP, DT_NSTRING, or DT_STRING, the length reported by this function was correct before fill_sqlda was called, but incorrect after fill_sqlda was called. The following example illustrates the use of sqlda_string_length:

for( col = 0; col < sqlda->sqld; col++ ) {
    sqlda->sqlvar[col].sqllen = sqlda_string_length( sqlda, col ) - 1;
    sqlda->sqlvar[col].sqltype = DT_STRING;
}
fill_sqlda( sqlda );

In the above example, if sqlda_string_length is called after the fill_sqlda call, the lengths returned are 1 greater than before. This problem has been fixed. The sqlda_string_length function will now account for the fact that the fill_sqlda function (or any of its variants) has been called. ================(Build #2266 - Engineering Case #796408)================ Execution of an Embedded SQL SET DESCRIPTOR statement would have failed to copy the last two bytes of data from a host variable of type VARCHAR or BINARY to the SQLDA variable data array. For example, consider the following code fragment:

static DECL_VARCHAR(17) myvc;
. . .
myvc.len = 17;
memmove( (char *)myvc.array, "12345678901234567", 17 );
EXEC SQL ALLOCATE DESCRIPTOR sqlda1 WITH MAX 10;
EXEC SQL SET DESCRIPTOR sqlda1 COUNT = 1;
length = 17;
EXEC SQL SET DESCRIPTOR sqlda1 VALUE 1 TYPE = 448, LENGTH = :length;
fill_sqlda( sqlda1 );
EXEC SQL SET DESCRIPTOR sqlda1 VALUE 1 DATA = :myvc;
_check_condition( SQLCODE == 0 &&
    strncmp( (char *)myvc.array,
             ((VARCHAR *)(sqlda1->sqlvar[0].sqldata))->array, 17 ) == 0 );
free_filled_sqlda( sqlda1 );

The array field ( ((VARCHAR *)(sqlda1->sqlvar[0].sqldata))->array ) would have contained all but the last two characters of the myvc variable. This problem has been fixed. With the new version of DBLIB, any Embedded SQL applications that use DECL_VARCHAR and DECL_BINARY must be recompiled using the Embedded SQL preprocessor (sqlpp). The Embedded SQL GET DESCRIPTOR statement, which copies data from the SQLDA to the host variable, does so correctly. ================(Build #2240 - Engineering Case #795135)================ If a connection string contained a START= parameter which included an -ec or -xs option containing a path and filename with spaces, a parsing error could have been given even if the value was enclosed in quotes. For example: UID=…;PWD=…;DBF=mydatabase.db;START=dbeng17 -xs "https(identity=my spacey file.id;identity_password=test)" This has been fixed. ================(Build #2130 - Engineering Case #783936)================ If the CHARSET connection option was set to a character set other than the client computer's OS character set, pieces of the connection_property('AppInfo') value could have been garbled. This would only have been visible if the hostname, username, or SQL Anywhere installation directory contained non-ASCII characters. This has been fixed. ================(Build #2002 - Engineering Case #769615)================ Prefetch performance may have been slightly lower than it should have been.
This has been fixed so that prefetch performance is better in some cases, depending on the operating system and the data being fetched. ================(Build #1645 - Engineering Case #745564)================ When using the LogFile connection parameter, the timestamp logged before each connection was truncated to only include the first digit of the seconds. This has been fixed. ================(Build #1598 - Engineering Case #742862)================ On Windows systems, if the database server address cache file (sasrv.ini) was not writable by the current user, repeated connection attempts to a non-cached server may have been slow. This has been fixed. ================(Build #1409 - Engineering Case #728789)================ If the SQLCONNECT environment variable was used to specify default connection values, and the length of the SQLCONNECT value was greater than or equal to 255 bytes, the SQLCONNECT value was ignored. This has been fixed so that SQLCONNECT values up to a length of 1023 bytes are accepted. ================(Build #1387 - Engineering Case #725206)================ A FETCH RELATIVE {offset}, where the offset was greater than 1, could have failed with a "Connection was terminated" error if the fetch was not the first fetch on the cursor, and prefetch was enabled for the fetch. For this to have occurred, the rows between the last fetched row and the requested row had to have values greater than 250 bytes. This has been fixed. As a workaround, prefetch can be disabled.

SQL Anywhere - Documentation

================(Build #1447 - Engineering Case #732338)================ Feature selection/de-selection switches for the setup.exe command-line are now as follows:

Switch     Feature
SERVER64   SQL Anywhere Server (64-bit)
CLIENT64   SQL Anywhere Client (64-bit)
SERVER32   SQL Anywhere Server (32-bit)
CLIENT32   SQL Anywhere Client (32-bit)
MOBILE     SQL Anywhere for Windows Mobile
UL         UltraLite
ML64       MobiLink (64-bit)
ML32       MobiLink (32-bit)
SR64       SQL Remote (64-bit)
SR32       SQL Remote (32-bit)
AT64       Administration Tools (64-bit)
AT32       Administration Tools (32-bit)
SM64       SQL Anywhere Monitor (64-bit)
SM32       SQL Anywhere Monitor (32-bit)
RS64       Relay Server (64-bit)
SAMPLES    Samples
FIPS       FIPS-approved Strong Encryption
CAC        CAC Authentication
HA         High Availability
IM         In-Memory Mode
SON        Read-only scale-out

Server and Client features are now separately selectable in both 64-bit and 32-bit installs. The following features have been removed: ECC Strong Encryption, QAnywhere, and Relay Server (32-bit). See also the Comments section of the following DocCommentXchange page: http://dcx.sybase.com/index.html#sa160/en/dbprogramming/using-silent-install-deploy.html*d5e50990

SQL Anywhere - JDBC Client Library

================(Build #2542 - Engineering Case #810460)================ The SQL Anywhere JDBC driver did not load when used with Java Development Kit 9 (JDK 9) Early-Access Builds. This problem has been fixed. ================(Build #2412 - Engineering Case #804664)================ When using any of the JDBC ResultSet class "get" methods (for example, getDouble, getFloat, getInt, etc.) on a NUMERIC or DECIMAL column with the SQL Anywhere JDBC driver (sajdbc4), a memory leak occurred. A workaround is to CAST the numeric/decimal column to a string in the corresponding SQL query. For example, CAST(total_value AS VARCHAR(16)). This problem has been fixed. ================(Build #2360 - Engineering Case #803052)================ When a JDBC application called getTypeInfo() of the DatabaseMetaData class, some column names were returned incorrectly: the PRECISION column of the result set was incorrectly named COLUMN_SIZE, and the AUTO_INCREMENT column was incorrectly named AUTO_UNIQUE_VALUE. The following example would have failed:

DatabaseMetaData meta = conn.getMetaData();
ResultSet typeinfo = meta.getTypeInfo();
while (typeinfo.next()) {
    System.out.printf("PRECISION=%s\n", typeinfo.getString("PRECISION"));
    System.out.printf("AUTO_INCREMENT=%s\n", typeinfo.getString("AUTO_INCREMENT"));
}

This has been fixed. ================(Build #2359 - Engineering Case #802981)================ When a JDBC application called getColumns or getProcedureColumns of the DatabaseMetaData class, some of the returned metadata information was incorrect:

DatabaseMetaData meta = conn.getMetaData();
ResultSet columns = meta.getColumns(null, null, "AllTypes", null);

- COLUMN_SIZE for numeric types is the precision, or number of digits, that can be represented. It does not include the sign. The COLUMN_SIZE reported for BIGINT, UNSIGNED BIGINT, UNSIGNED INT, and UNSIGNED SMALLINT was incorrect. These values have been corrected from the byte length to the numeric precision.
COLUMN_SIZE for INTEGER, TINYINT, and SMALLINT is unchanged. - DECIMAL_DIGITS for all exact numeric types other than SQL_DECIMAL and SQL_NUMERIC is 0. The DECIMAL_DIGITS reported for BIGINT, UNSIGNED BIGINT, UNSIGNED INT, and UNSIGNED SMALLINT was NULL. This has been corrected to 0. - DECIMAL_DIGITS for SQL_TYPE_TIME and SQL_TYPE_TIMESTAMP is the number of digits in the fractional seconds component. The DECIMAL_DIGITS reported for TIME and TIMESTAMP WITH TIME ZONE was NULL. This has been corrected to 6. - CHAR_OCTET_LENGTH is the maximum length in bytes of a character or binary data type column. The CHAR_OCTET_LENGTH for TIMESTAMP WITH TIME ZONE was NULL. This has been corrected to 33. ================(Build #2269 - Engineering Case #797242)================ If the JDBC setMaxFieldSize method was used to truncate the length of a binary column transmitted from the database server to the client, a crash may have occurred in the JDBC application. The JDBC setMaxFieldSize(int max) function sets the limit for the maximum number of bytes that can be returned for character and binary column values in a ResultSet object produced by this Statement object. For example, if the binary column length is 300,000 and max is 256, then a crash may have occurred in a getBytes call for that column. The following is an example of a query that can produce binary column values with length 300,000: select cast(repeat( '0123456789', 30000 ) as long binary) from sa_rowgenerator(1,4) This problem also affected the Interactive SQL utility (dbisql) when fetching BINARY columns. The problem has been fixed. ================(Build #2130 - Engineering Case #784055)================ A JDBC application could have found that fetching result sets with long varchar, long nvarchar or long binary columns took much longer with a scrollable cursor (i.e. an insensitive or sensitive statement) when compared to a non-scrollable cursor (i.e. a forward-only statement).
This difference in performance was most noticeable if most of the long values were smaller than 256K. The performance issue has now been fixed, and scrollable cursors now perform as well as non-scrollable cursors. ================(Build #2023 - Engineering Case #772022)================ If an application fetched a result set that contained long varchar, long binary or long nvarchar columns, then the SQL Anywhere JDBC driver would have fetched the result set one row at a time in order to ensure the full column value of the long columns could be retrieved. For result sets that do not contain long columns, the SQL Anywhere JDBC driver fetched multiple rows at a time instead. Applications can use the Statement.setMaxFieldSize() method to attempt to limit the amount of data the JDBC driver retrieves; however, calling this method did not make the JDBC driver fetch multiple rows at a time. The SQL Anywhere JDBC driver will now fetch multiple rows at a time for result sets that contain long columns if Statement.setMaxFieldSize() is called and the value passed in to setMaxFieldSize() is less than or equal to 32K. The behavior for result sets that do not contain long columns remains the same, and the JDBC driver will continue to fetch multiple rows at a time for these result sets. ================(Build #1992 - Engineering Case #768718)================ When using the JDBC driver in a multithreaded Java application on Windows, a crash may have occurred in the heap management run-time code (HeapFree). This problem also appeared when using the MobiLink Profiler, since it uses the JDBC driver. This problem has been fixed. ================(Build #1684 - Engineering Case #747898)================ When using the SQL Anywhere JDBC driver, if two or more addBatch calls were followed by an executeBatch and the application then used executeUpdate in non-batched mode, the application would have crashed. This problem has been fixed.
A work-around is to use addBatch/executeBatch for all executions of the prepared statement once addBatch/executeBatch has been used. ================(Build #1469 - Engineering Case #733726)================ If an application using the SQL Anywhere JDBC Driver failed to explicitly close all open connections before attempting to exit, then there was a chance the Java VM would have crashed. This was most noticeable on Unix platforms. This problem has now been fixed. ================(Build #1453 - Engineering Case #732853)================ Attempting to create a MobiLink project using Sybase Central on a 64-bit platform could in some cases have caused Sybase Central to crash. This problem was most noticeable on Solaris and Mac platforms. The problem has now been fixed. ================(Build #1430 - Engineering Case #722245)================ Calling PreparedStatement.setNull() and passing in a SQL type of java.sql.Types.NULL would have incorrectly returned a “bad datatype” error. This problem has now been fixed, and java.sql.Types.NULL is now allowed in setNull() calls.
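The COLUMN_SIZE correction in case 802981 above (byte length replaced by decimal precision for BIGINT and the unsigned integer types) can be sanity-checked with plain Java and no JDBC driver. This is a worked example of the arithmetic only, not driver code:

```java
public class BigintPrecision {
    public static void main(String[] args) {
        // What was incorrectly reported: the storage size of a BIGINT in bytes.
        int byteLength = Long.BYTES;  // 8

        // What is reported after the fix: the decimal precision, i.e. the
        // number of digits in the largest BIGINT value, 9223372036854775807.
        int precision = String.valueOf(Long.MAX_VALUE).length();  // 19

        System.out.println("byte length = " + byteLength + ", precision = " + precision);
    }
}
```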

SQL Anywhere - ODBC Client Library

================(Build #2702 - Engineering Case #815389)================ For SQLGetDescField(SQL_DESC_UNNAMED), the SQL Anywhere ODBC driver always returns SQL_UNNAMED. For columns and parameters that have names, the ODBC driver should return SQL_NAMED. The following is a sample code sequence for parameter 2 of a prepared CALL statement:

SQLULEN named = 0;
SQLGetStmtAttr( hstmt, SQL_ATTR_IMP_PARAM_DESC, &hdesc, 0, NULL );
SQLGetDescField( hdesc, 2, SQL_DESC_UNNAMED, (SQLPOINTER) &named, 0, NULL );

This problem has been fixed. ================(Build #2651 - Engineering Case #813650)================ If an error occurs when inserting a batch of rows with the SQL Anywhere ODBC driver (a wide insert), then the driver drops into single-row insert mode. If this results in all rows being inserted correctly, then the ODBC driver should return SQL_SUCCESS, not SQL_ERROR. This problem has been fixed. The ODBC driver will return SQL_SUCCESS if all rows are inserted without error, and SQL_ERROR if one or more rows fail insertion. Note that returning SQL_ERROR deviates from the ODBC standard, which requires that SQL_SUCCESS_WITH_INFO be returned if some rows are successfully inserted. ================(Build #2570 - Engineering Case #811134)================ The SQL Anywhere ODBC driver returns 8 for the column length of a SQL_BIGINT data type for the following ODBC API procedures and the indicated parameter value: SQLColAttributes(SQL_COLUMN_LENGTH), SQLColAttribute(SQL_COLUMN_LENGTH), SQLGetDescField(SQL_COLUMN_LENGTH). The number 8 represents the number of bytes required for the default binding of this data type as binary. This behavior conforms to current Microsoft ODBC drivers. The older ODBC 2.0 specification called for a default binding of SQL_C_CHAR and a column length of 20. This requirement likely originated when 64-bit integer values were not supported natively by the processors of the day.
Visual Basic 6 Remote Data Objects (RDO) modules expect this behavior. Visual Basic 6 was introduced around 1998, and support for it was dropped by Microsoft in 2008. Version 12 and earlier SQL Anywhere ODBC drivers return a length of 19 in this situation, but this was changed in version 16 to favor conformance with Microsoft ODBC drivers. This problem has been addressed. In order to resume support for VB applications using RDO, the undocumented connection parameter VBRDO can now be used to cause the SQL Anywhere ODBC driver to revert to the ODBC 2.0 specification's stipulation that 20 be returned for the column length of SQL_BIGINT. You must include VBRDO=Yes (or VBRDO=True, VBRDO=1, VBRDO=On) in the application's connection parameters in order to obtain the old behavior that containers like the Microsoft Remote Data Control (MSRDC) require. Example: DSN=Test17;VBRDO=Yes When using the Microsoft ODBC Data Source Administrator to create or modify a data source, the VBRDO parameter can be set manually on the Advanced tab. It can also be set or queried using the SQL Anywhere dbdsn utility. Choosing VBRDO=Yes causes SQLColAttributes(SQL_COLUMN_LENGTH), SQLColAttribute(SQL_COLUMN_LENGTH), and SQLGetDescField(SQL_COLUMN_LENGTH) to return 20 instead of 8. Omitting the parameter or choosing VBRDO=No preserves the current behavior. The use of VBRDO=Yes also causes SQLGetInfo(SQL_CATALOG_NAME_SEPARATOR) to return "." instead of "", and SQLGetInfo(SQL_CATALOG_LOCATION) to return SQL_CL_START instead of 0. This behavior is not new to the driver, and was required in the past to support some RDO functionality. Note that the ODBC driver does not support catalogs, since a client connection is made to the database, not the server.
This could result in memory corruption, especially if row-wise parameter binding was used. The driver should not alter any parameter indicator values. This problem has been fixed. The ODBC driver could also incorrectly update a parameter status array element with SQL_PARAM_SUCCESS_WITH_INFO even though there was no corresponding diagnostic record. If the operation is successful, then the array element must be set to SQL_PARAM_SUCCESS. This problem has been fixed. ================(Build #2389 - Engineering Case #803871)================ The following corrections have been made to the SQL Anywhere ODBC driver:
- SQLDescribeCol(ColumnSize) for TIME was 6 and is now 15; for TIMESTAMP/DATETIME/SMALLDATETIME it was 6 and is now 26.
- SQLDescribeCol(DecimalDigits) for TIME was 0 and is now 6.
- The ODBC 2.0 SQLColAttributes(SQL_COLUMN_LENGTH) function returned a display size for all types, and now returns the octet length for all types.
- The ODBC 2.0 SQLColAttributes(SQL_COLUMN_PRECISION) function result for REAL was 24 and is now 7; for DOUBLE it was 53 and is now 15; for TIME it was 6 and is now 15; for TIMESTAMP/DATETIME/SMALLDATETIME it was 6 and is now 26; for TEXT/IMAGE it was 0 and is now 2147483647.
- SQLColAttribute(SQL_DESC_DISPLAY_SIZE) for BIT was 2 and is now 1.
- SQLColAttribute(SQL_DESC_LENGTH) for NUMERIC(X,5) was X+2 and is now X.
- SQLColAttribute(SQL_DESC_PRECISION) for DATE was 10 and is now 0 (the number of digits in the fractional seconds component for the SQL_TYPE_TIME, SQL_TYPE_TIMESTAMP, or SQL_INTERVAL_SECOND data type).
- SQLColAttribute(SQL_DESC_SCALE) for TIME was 0 and is now 6. This matches the Microsoft ODBC driver; however, the field value is undefined for this data type.
- The SQL Anywhere ODBC driver now ensures that the ColumnSize value returned by SQLDescribeCol() matches the value returned by SQLColAttribute(SQL_DESC_LENGTH).
- The SQL Anywhere ODBC driver now ensures that the DecimalDigits value returned by SQLDescribeCol() matches the value returned by SQLColAttribute(SQL_DESC_SCALE). ================(Build #2361 - Engineering Case #803007)================ When an ODBC application called SQLColAttribute, SQLColumns, SQLProcedureColumns, or SQLGetTypeInfo, some of the returned metadata information was incorrect:
- FLOAT, REAL, and DOUBLE are approximate numeric data types, so SQL_DESC_NUM_PREC_RADIX is 2 and the SQL_DESC_PRECISION field must contain the number of bits. For FLOAT, REAL, and DOUBLE columns, SQLGetTypeInfo returns a NUM_PREC_RADIX of 2. The reported COLUMN_SIZE values were 15, 7, and 15 respectively, which represent base 10 precision. The COLUMN_SIZE has been corrected to 53, 24, and 53 respectively, which represent base 2 precision.
- For TIME columns, SQLGetTypeInfo must return a COLUMN_SIZE equal to 9 + s (the number of characters in the hh:mm:ss[.fff...] format, where s is the seconds precision). For SQL Anywhere, s is 6. For TIME columns, SQLGetTypeInfo reported the COLUMN_SIZE as 8. The COLUMN_SIZE has been corrected to 15.
Corresponding corrections have been made to SQLColumns and SQLProcedureColumns:
- SQLColAttribute(SQL_DESC_PRECISION) must return the precision in bits when SQL_DESC_NUM_PREC_RADIX is 2. For REAL columns, 7 was reported. This has been corrected to 24. For FLOAT and DOUBLE columns, 15 was reported. This has been corrected to 53.
- SQLColAttribute(SQL_DESC_LENGTH) must return the PRECISION descriptor field for all numeric types; for TIME, it must return 15 (9 + 6 fractional seconds). For REAL columns, 7 was reported. This has been corrected to 24. For FLOAT and DOUBLE columns, 15 was reported. This has been corrected to 53. For TIME columns, 8 was reported. This has been corrected to 15.
- SQLColAttribute(SQL_DESC_DISPLAY_SIZE) must return the maximum number of characters required to display data from the column. For REAL, it is 14.
For FLOAT and DOUBLE, it is 24. For TIME, it is 9 + s (a time in the format hh:mm:ss[.fff...], where s is the fractional seconds precision). For SQL Anywhere s=6. For REAL columns, 13 were reported. This has been corrected to 14 (for example, -3.40282347e+38). For FLOAT and DOUBLE columns, 22 were reported. This has been corrected to 24 (for example, -1.7976931348623150e+308). For TIME columns, 8 were reported. This has been corrected to 15 (for example, 23:59:59.999999). These corrections also appear for corresponding metadata methods in the SQL Anywhere JDBC driver. ================(Build #2359 - Engineering Case #802980)================ When an ODBC application had called SQLColumns or SQLProcedureColumns, some returned metadata information were incorrect: - COLUMN_SIZE for numeric types is the precision, or number of digits, that can be represented. The COLUMN_SIZE reported for BIGINT, UNSIGNED BIGINT, UNSIGNED INT, and UNSIGNED SMALLINT were incorrect. This have been corrected from byte length to numeric precision. COLUMN_SIZE for INTEGER, TINYINT, and SMALLINT is unchanged. - DECIMAL_DIGITS for all exact numeric types other than SQL_DECIMAL and SQL_NUMERIC is 0. The DECIMAL_DIGITS for BIGINT, UNSIGNED BIGINT, UNSIGNED INT, and UNSIGNED SMALLINT were NULL. This has been corrected to 0. - DECIMAL_DIGITS for SQL_TYPE_TIME and SQL_TYPE_TIMESTAMP is the number digits in the fractional seconds component. The DECIMAL_DIGITS for TIME and TIMESTAMP WITH TIME ZONE were NULL. This has been corrected to 6. - CHAR_OCTET_LENGTH is the maximum length in bytes of a character or binary data type column. The CHAR_OCTET_LENGTH for TIMESTAMP WITH TIME ZONE were NULL. This has been corrected to 33. 
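The corrected SQL_DESC_DISPLAY_SIZE values above follow directly from the ODBC display-size rules. The following is a minimal illustrative sketch (the helper name is ours, not part of any API):

```python
def display_size(sql_type, seconds_precision=6):
    """Maximum characters needed to display a value of the given SQL type."""
    if sql_type == "REAL":
        return 14                      # e.g. -3.40282347e+38
    if sql_type in ("FLOAT", "DOUBLE"):
        return 24                      # e.g. -1.7976931348623150e+308
    if sql_type == "TIME":
        return 9 + seconds_precision   # hh:mm:ss plus '.' and fractional digits
    raise ValueError(sql_type)

# The TIME example from the fix description: 23:59:59.999999 is 15 characters.
assert display_size("TIME") == len("23:59:59.999999") == 15
assert display_size("REAL") == 14
assert display_size("DOUBLE") == 24
```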
================(Build #2339 - Engineering Case #802217)================ When an ODBC application calls SQLGetInfo with the SQL_IDENTIFIER_QUOTE_CHAR option, the SQL Anywhere ODBC driver returns the single character SPACE as a string (" ") when the database option quoted_identifier has been set OFF. If the database contains identifiers with spaces (for example, a table named “My Appointments”), then the name must be quoted using double quotation marks ("), back ticks (`), or brackets ([]). However, when quoted_identifier has been set OFF, then one of the latter two quoting mechanisms must be used for “spacey” identifiers since "abc" is equivalent to 'abc' in this mode. The following example shows an acceptable way to quote a spacey table name, when quoted_identifier has been set OFF: SELECT * FROM [My Appointments]; If you use an ODBC-based application that generates SQL (for example, Crystal Reports), and quoted_identifier has been set OFF (perhaps inadvertently), the generator might create an invalid SQL statement such as the following since the “quote” character was reported to be a space character. SELECT * FROM My Appointments; This problem has been fixed. The ODBC driver will now return the back tick character as a string ("`") for version 12 or later databases when quoted_identifier has been set OFF. This means that the SQL generator might build the following query, provided it uses SQLGetInfo( SQL_IDENTIFIER_QUOTE_CHAR ) to obtain the quoting character. SELECT * FROM `My Appointments`; Also, when SQL_ATTR_METADATA_ID has been set TRUE, catalog functions now accept the quoting of identifiers as parameters using back ticks. Catalog functions include SQLTables(), SQLColumns(), SQLTablePrivileges(), and so on. Previously, only double quotes and brackets were supported. ================(Build #2304 - Engineering Case #799787)================ The 17.0 SQL Anywhere database server now supports server-side autocommit. 
In general, the use of server-side autocommit improves the performance of applications. However, there are some 3rd-party frameworks, like Hibernate, that wrap SQL statement execution in setAutoCommit calls (using JDBC as an example). This is equivalent to the following sample JDBC code sequence. while( iterations-- > 0 ) { conn.setAutoCommit( true ); stmt.execute( sql_statement ); conn.setAutoCommit( false ); } When connected to a 17.0 database server, such a construct results in suboptimal performance because each call to the JDBC setAutoCommit method sends a “SET TEMPORARY OPTION auto_commit=’ON’” (or ‘OFF’) statement to the database server for execution. This problem has been fixed. A new connection parameter, ClientAutocommit=yes, can be used to cause the client JDBC- or ODBC-based application to revert to client-side autocommit behavior. Setting ClientAutocommit=no corresponds to the default behavior. Note that the ClientAutocommit connection parameter can be used with version 17.0, 16.0, or 12.0.1 ODBC drivers, but it has no effect if the database server does not support server-side autocommit (e.g., 16.0 or 12.0.1 servers). Of course, a work-around for better performance would be to move the setAutoCommit calls outside the loop, but in some 3rd-party frameworks this might not be possible. conn.setAutoCommit( true ); while( iterations-- > 0 ) { stmt.execute( sql_statement ); } conn.setAutoCommit( false ); On Windows, the Advanced tab of the ODBC Configuration for SQL Anywhere dialog (using the ODBC Data Source Administrator) has been updated to include this new connection parameter. ================(Build #2304 - Engineering Case #799779)================ The default AUTOCOMMIT behavior for the SQL Anywhere ODBC driver is SQL_AUTOCOMMIT_ON. 
Changing the AUTOCOMMIT setting to SQL_AUTOCOMMIT_OFF before connecting to the data source would have caused the driver to override this setting when connecting to a database server that supports server-side autocommit (as version 17 servers do). The following is a sample ODBC code sequence where this problem occurs. rc = SQLSetConnectAttr( hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, 0 ); rc = SQLDriverConnect( hdbc, (SQLHWND)NULL, ds, SQL_NTS, scso, sizeof(scso)-1, &cbso, SQL_DRIVER_NOPROMPT ); This problem has now been fixed. A work-around is to interchange the order of the SQLSetConnectAttr and the SQLDriverConnect calls. ================(Build #2277 - Engineering Case #798010)================ When using the SQL Anywhere ODBC driver, a SQLNativeSql call would have returned an error if the output buffer pointer (OutStatementText) was NULL, or if the buffer length (BufferLength) was long enough to result in a 16-bit arithmetic overflow when calculating the buffer size required for conversion of wide character strings to multi-byte character sets including UTF-8. These problems have been fixed. ================(Build #2260 - Engineering Case #796700)================ If a column that is longer than the SQL_ATTR_MAX_LENGTH value (default 256K) was bound as SQL_C_BINARY and a multi-row fetch was performed, then the ODBC driver would have crashed. 
For example, if the column in the following query was bound as SQL_C_BINARY and the row array size was 4, then the ODBC driver would have crashed when attempting to fetch the rowset, provided that the SQL_ATTR_MAX_LENGTH value was less than 300,000. select cast(repeat( '0123456789', 30000 ) as long varchar) from sa_rowgenerator(1,4) This problem has been fixed. Note, this problem also affects the Interactive SQL utility (dbisql) when fetching BINARY columns. ================(Build #2253 - Engineering Case #796090)================ Using the SQL Anywhere ODBC driver, calling SQLGetTypeInfo() would have returned the following information in the result set when connected to an SAP IQ database server: TYPE_NAME=table DATA_TYPE=SQL_VARCHAR COLUMN_SIZE=32767 LP= LS= CREATE_PARAMS= NULLABLE=1 TYPE_ORDINAL=1 The "table" type is not a suitable SQL_VARCHAR data type declarative and is not equivalent to the "char" data type. This row should not appear in the result set. Using the SQL Anywhere JDBC driver, the DatabaseMetaData.getTypeInfo() call will also include "table" in the result set when connected to an SAP IQ database server. These problems have been fixed. ================(Build #2248 - Engineering Case #795701)================ If the high-order byte in the val field of a SQL_NUMERIC_STRUCT was non-zero, then the SQL Anywhere ODBC driver may not have converted the numeric value correctly before sending it to the database server. The column value must be bound as a SQL_NUMERIC type and be sufficiently large for this to have occurred. For example, the representation of 31415926535897932384626433832795028.8419 in a SQL_NUMERIC_STRUCT is such that the high-order byte of the val field is 0xec. An incorrect value would have been stored in the table column. This problem has now been fixed. 
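The conversion involved in Engineering Case #795701 can be illustrated with a small sketch. A SQL_NUMERIC_STRUCT carries the unscaled value as a 16-byte little-endian integer together with precision, scale, and sign; the helper below is illustrative only, not driver code:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # enough precision for a full 38-digit NUMERIC

def numeric_struct_to_decimal(val, scale, sign):
    """val: the 16-byte little-endian unscaled integer, as in SQL_NUMERIC_STRUCT."""
    unscaled = int.from_bytes(val, "little")
    d = Decimal(unscaled).scaleb(-scale)
    return d if sign == 1 else -d

# Reproduce the example from the fix description:
value = Decimal("31415926535897932384626433832795028.8419")
unscaled = int(value.scaleb(4))          # scale is 4 fractional digits
val = unscaled.to_bytes(16, "little")

assert val[15] == 0xEC                   # the non-zero high-order byte that triggered the bug
assert numeric_struct_to_decimal(val, 4, 1) == value
```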
================(Build #2199 - Engineering Case #791481)================ When using the SQL Anywhere ODBC driver, the character size, display size, and octet length information returned by the ODBC functions SQLDescribeCol and SQLColAttribute were wrong for CHAR(x CHAR) or VARCHAR(x CHAR) columns when connected to a multi-byte character set (MBCS) database using the “wide” interface API (UNICODE mode). Given a table with the following columns. c_nchar nchar(42), c_charchar char(42 char), c_char char(126) The c_charchar column will hold at most 42 national characters. For example, a 932JPN database column holds 42 Japanese double-byte characters which requires at most 84 bytes of memory to store. A UTF-8 database column holds 42 Japanese double-byte characters which requires at most 168 bytes of memory to store (4*42=168 is the worst-case scenario for UTF-8 surrogate code points). For the c_charchar column, character size and display size should be 42. Character size is the number of characters, not the number of bytes. For the c_charchar column, the octet length is the maximum number of bytes required to store these characters in memory on the client (e.g., number of characters * 2 for double-byte, number of characters * 4 for UTF-8). For a DBCS database like 932JPN, the ODBC driver reported 84 for the character size, 84 for the display size, and 84 for the octet length. The character size and display size were incorrect. There was no problem when the ODBC application was compiled for and run in ANSI mode (for example, when using SQLDriverConnectA rather than SQLDriverConnectW). This problem has now been fixed. For each of the columns noted above, the following is now reported. 
Column 1: SQLDescribeCol: column name = c_nchar SQLDescribeCol: data type = SQL_WCHAR SQLDescribeCol: character size = 42 SQLColAttribute(SQL_DESC_DISPLAY_SIZE): character size = 42 SQLColAttribute(SQL_DESC_LENGTH): character size = 42 SQLColAttribute(SQL_DESC_OCTET_LENGTH): byte size = 168 Column 2: SQLDescribeCol: column name = c_charchar SQLDescribeCol: data type = SQL_CHAR SQLDescribeCol: character size = 42 SQLColAttribute(SQL_DESC_DISPLAY_SIZE): character size = 42 SQLColAttribute(SQL_DESC_LENGTH): character size = 42 SQLColAttribute(SQL_DESC_OCTET_LENGTH): byte size = 84 Column 3: SQLDescribeCol: column name = c_char SQLDescribeCol: data type = SQL_CHAR SQLDescribeCol: character size = 126 SQLColAttribute(SQL_DESC_DISPLAY_SIZE): character size = 126 SQLColAttribute(SQL_DESC_LENGTH): character size = 126 SQLColAttribute(SQL_DESC_OCTET_LENGTH): byte size = 126 ================(Build #2192 - Engineering Case #790651)================ When using the version 12 or 16 ODBC driver, any query that began with the prefix “insert” was incorrectly categorized as an INSERT statement. Beginning with version 17, any query that began with the prefix “insert”, “update”, “delete”, or “merge” was incorrectly categorized as an INSERT, UPDATE, DELETE, or MERGE statement. This problem has been fixed. Note that the comparison was case-insensitive (insert, Insert, INSERT, etc. all match). For example, if the query “updateInventory( 100 )” was executed, the ODBC driver would have assumed this was an UPDATE statement. ================(Build #2171 - Engineering Case #787903)================ If the StartLine (START) connection parameter contained the string “-n” anywhere in the text, it was interpreted as if the -n option was specified. This could have affected the final server name that was chosen. For example: dbisql -c "UID=DBA;PWD=sql;START=dbeng16.exe -z -o c:\y-n\output.log;Server=SRV1; DBN=DBN1;DBF=demo.db" This problem has been corrected. 
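The character-size versus octet-length distinction in Engineering Case #791481 can be illustrated with Python's codecs (the sample string is ours, chosen only to show the byte counts):

```python
chars = "日" * 42                        # 42 Japanese characters, one code point each

assert len(chars) == 42                  # character size / display size reported by ODBC
assert len(chars.encode("cp932")) == 84  # octet length for a 932JPN (DBCS) database
assert len(chars.encode("utf-8")) == 126 # this character takes 3 bytes in UTF-8; the
                                         # driver reserves 4 bytes per character (168)
                                         # to cover the UTF-8 worst case
```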
================(Build #2168 - Engineering Case #788053)================ If a User Data Source Name (DSN) was created with the same name as a System Data Source Name, the original System Data Source could not have been examined or modified using the ODBC Configuration for SQL Anywhere window of the Windows ODBC Data Source Administrator. Furthermore, an attempt to modify the System DSN would have always resulted in a modified version of the User DSN being written over the System DSN. This problem has been fixed. As a work-around, the dbdsn/iqdsn tool can be used to create/modify user and system data sources. ================(Build #2124 - Engineering Case #783369)================ On Unix systems, if an ODBC connection string contained a parameter with an empty value followed by a DSN parameter (for example “dbn=;dsn=mydsn”), the DSN would not be read and the connection would have failed. This has been fixed. ================(Build #2087 - Engineering Case #779772)================ When a user-defined table had the same name as a system table (for example, SYSINDEX, SYSPROCPARM, etc.), one or more of the following ODBC functions may have failed: SQLForeignKeys SQLProcedures SQLSpecialColumns SQLStatistics SQLTablePrivileges SQLTables This problem has been fixed. ================(Build #2068 - Engineering Case #777061)================ Fetch performance on cursors for which prefetch was enabled may have been a bit poorer than it should have been when the cursor was using close to the prefetch memory limit (default of 512K per connection). This slowdown was more likely to have occurred when using wide fetches (also called array fetches). This has been fixed and performance in this case is now improved. 
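The connection-string issue in Engineering Case #783369 comes down to tokenizing parameters correctly. A minimal sketch of the expected parsing behavior (not the actual driver code):

```python
def parse_conn_string(s):
    """Split 'key=value;key=value' pairs; empty values must be preserved."""
    params = {}
    for part in s.split(";"):
        if not part:
            continue
        key, _, value = part.partition("=")
        params[key.strip().lower()] = value
    return params

# An empty value must not swallow the parameter that follows it:
params = parse_conn_string("dbn=;dsn=mydsn")
assert params["dsn"] == "mydsn"
assert params["dbn"] == ""
```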
================(Build #2007 - Engineering Case #769156)================ Given a simple stored procedure such as the following: create procedure sp_test( in @a integer, in @b integer ) begin select @a + @b as c; end If two host variables were bound and “SELECT * FROM sp_test(?, ?)” was executed just after starting the database server, the error "Not enough values for host variables" might have resulted. Subsequent execution attempts would have succeeded. A similar problem would have occurred when "SELECT * FROM sp_test(?, ?)" was executed immediately after (dropping and then) creating the stored procedure. This has now been fixed. ================(Build #1967 - Engineering Case #766132)================ If the ODBC driver or other client interfaces ran out of memory, they could have crashed. This has been fixed. ================(Build #1915 - Engineering Case #763867)================ The SQLErrorW, SQLDataSourcesW, and SQLGetDescRecW functions were incorrectly returning a byte count, rather than a character count. For example, the column name "ABC" has a character count of 3, but the wide character (Unicode) string "ABC" occupies 6 bytes. The byte count (6 in this example) was returned. This has been corrected. This problem could have also impacted the values returned by SQLError, SQLDataSources, and SQLGetDescRec if the driver manager called their "wide" equivalents. This problem is also corrected. In addition, SQLGetDescRec could return a random field name for a descriptor when there was no field name (for example, a bookmark column has no field name). There was also the extremely rare possibility of a segment violation fault. These problems have been corrected. ================(Build #1915 - Engineering Case #763866)================ The odbc.h header file that was used to compile ODBC applications could have provided an inconsistent definition of HWND and SQLHWND for 64-bit Windows and other 64-bit platforms. 
For 64-bit compilers, the type might resolve to a 32-bit integer, which is incorrect. Window handles, like other handles, should always be 64-bit pointer-type objects for a 64-bit executable. This problem has been fixed. ================(Build #1846 - Engineering Case #760362)================ If the SQL Anywhere ODBC driver, or the Sybase IQ ODBC driver version 15.x or later, was used to connect to a database with the database option 'quoted_identifier' set to 'off', or to a database on a 9.0.2 or earlier server, the ODBC driver would have failed to establish some properties of the DBMS. When quoted_identifier was 'off': 1. For a Sybase IQ DBMS, the driver would have reported [SQL Anywhere] in messages rather than [Sybase IQ]. 2. For a Sybase IQ DBMS, the driver would have reported "SQL Anywhere" instead of "Sybase IQ" for SQLGetInfo(SQL_DBMS_NAME). 3. For a Sybase IQ DBMS, the driver would not have used the "SYS.SYSIQVINDEX" table for SQLStatistics, but would have used "SYS.SYSINDEX" instead. 4. For a Sybase IQ DBMS, the ODBC driver would have reported the wrong server version number (e.g., 12.0.1 rather than 15.4) for SQLGetInfo(SQL_DBMS_VER). In addition, when quoted_identifier was 'off' or the server was version 9.0.2 or earlier: 5. The ODBC driver would not have known the correct CHARSET setting. 6. The ODBC driver may have had the wrong setting for the case sensitivity of the database, which may have affected SQLGetTypeInfo and other schema query functions. 7. The ODBC driver may have had the wrong setting for the odbc_distinguish_char_and_varchar option. 8. The ODBC driver may have had the wrong setting for the odbc_describe_binary_as_varbinary option. Other than these issues, there are no other known side-effects. This problem has been fixed. 
================(Build #1825 - Engineering Case #757516)================ Calling the ODBC function SQLBulkOperations() with SQL_ADD could have failed to insert rows without returning an error if the number of rows multiplied by the number of columns was more than 65535. This has been fixed. ================(Build #1782 - Engineering Case #754304)================ In rare, timing-dependent cases, a multi-threaded client application could have incorrectly received the error "Parse Error: DSN '<name>' does not exist", or possibly other connection errors. In order for this to have occurred, the process needed to be making concurrent connections and needed to use both a User DSN and a System DSN. This has been fixed. ================(Build #1598 - Engineering Case #742733)================ When calling a procedure without a RESULT clause using ODBC and JDBC, the performance was not as fast as it could have been. This has been fixed and performance has been improved. ================(Build #1567 - Engineering Case #740842)================ When using the ODBC Data Source Administrator to configure a SQL Anywhere 11 ODBC data source, the Database File “Browse” button would have returned a truncated string. Only the first 7 or 3 characters of the file path (depending on bitness) were returned. This problem has now been fixed. ================(Build #1497 - Engineering Case #736249)================ Numeric and Decimal columns and parameters are transferred to and from the client in packed-decimal format when the column or parameter is bound as SQL_NUMERIC (SQL_C_NUMERIC). APIs like ODBC, OLE DB, and JDBC must convert between the packed-decimal number and a 128-bit binary number for these cases. Improvements have been made to the conversion routines for Windows 64-bit platforms. Conversions from packed-decimal to 128-bit binary for numbers in the range 10^20 to 10^38 are now approximately 67 times faster. 
Conversions from 128-bit binary to packed-decimal for numbers in the range 10^20 to 10^38 are now approximately 2.7 times faster. ================(Build #1471 - Engineering Case #733923)================ The Additional Connection Parameters field on the Advanced page of the ODBC configuration dialog is used to specify rarely used connection parameters that do not appear on other pages of the wizard. The problem was that once a parameter was added in this page, it could not be removed again. The value of the parameter could have been modified, but it could only be deleted by editing it directly in the registry or by recreating the data source. This problem has been fixed. ================(Build #1439 - Engineering Case #731823)================ Calling the ODBC function SQLGetInfo to retrieve the version of the ODBC driver (i.e., SQLGetInfo( dbc, SQL_DRIVER_VER, … )) would have returned a string that did not include the build number of the driver. This has been corrected so that the string now contains the build number. For version 12.0.1, the string returned was “12.00.0001”. As of this change, the value returned is “12.01.xxxx” where xxxx is the build number.
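After the fix in Engineering Case #731823, the string returned by SQLGetInfo(SQL_DRIVER_VER) carries the build number in its last component. A small illustrative sketch (the sample version string is hypothetical):

```python
def parse_driver_version(ver):
    """Split an ODBC driver version string of the form 'MM.mm.bbbb'."""
    major, minor, build = ver.split(".")
    return int(major), int(minor), int(build)

# With the fix, the last component is the driver's build number:
assert parse_driver_version("12.01.1439") == (12, 1, 1439)
```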

SQL Anywhere - OData Server

================(Build #2742 - Engineering Case #816812)================ The OData Server has been upgraded to use Jetty 9.2.26. ================(Build #2724 - Engineering Case #816289)================ In some circumstances, the OData Producer would record a NullException while closing a connection. This would result in an internal server error being reported to the client, even though the operation completed. The exception was introduced by CR 813769 (17.0.9.4786, 17.0.8.4154, 16.0.0.2654) when very low values are used for the ConnectionAuthExpiry option. This has been fixed. ================(Build #2583 - Engineering Case #811527)================ The OData Server has been upgraded to use Jetty 9.2.22. ================(Build #2559 - Engineering Case #810820)================ An OData service, under heavy load, may have produced many log messages concerning java.lang.NullPointerException in a TreeSet used by the ConnectionPool. This has been fixed. ================(Build #2260 - Engineering Case #796644)================ Any update request (bind) of a principal entity which modified a navigational property (from the principal role) to a dependent entity would have ignored the changes to that navigational property. Navigational properties modified from the dependent role to a different principal entity were not ignored. This has been fixed. ================(Build #2260 - Engineering Case #796643)================ Attempting to do an insert or update of an entity with a link where one of the ends had multiplicity 0..1 could be rejected as a constraint violation. This happened when the entity being linked to was already linked to by another entity. The existing link must be removed to preserve the multiplicity. This has been fixed. If the principal multiplicity is 0..1 or 1, the dependent multiplicity is 0..1, and the dependent end is nullable, the OData Producer will now remove the existing link. 
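The corrected 0..1 link handling in Engineering Case #796643 can be sketched as follows. Plain dicts stand in for entities and the function name is ours, not the Producer's internals:

```python
def attach_link(links, principal, dependent):
    """links maps principal key -> dependent key (a 1 : 0..1 association)."""
    # If the dependent is already linked to a different principal, detach the
    # old link first (the dependent end is nullable) instead of rejecting the
    # request as a constraint violation.
    for p, d in list(links.items()):
        if d == dependent and p != principal:
            del links[p]
    links[principal] = dependent

links = {"P1": "D1"}
attach_link(links, "P2", "D1")   # D1 is detached from P1, then attached to P2
assert links == {"P2": "D1"}
```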
================(Build #2256 - Engineering Case #796206)================ If the OData Producer received a request that had a chunked transfer encoding, it could have reported that the request contained no data and failed. This has been fixed. ================(Build #2251 - Engineering Case #795921)================ If a new user made many parallel requests before the metadata had been built for that user, the OData Producer would attempt to build the same metadata in parallel, keeping only one copy (the duplicate work was wasted). This has been fixed. ================(Build #2247 - Engineering Case #796574)================ The OData server and OData Producer servlets have been upgraded to use Jetty 9.2.4 and version 3.1 of the servlet API. ================(Build #2240 - Engineering Case #795072)================ Attempting to do a POST or PUT to modify a link using $links, where one of the ends had multiplicity 0..1, could have been rejected with an invalid cardinality error. This would have happened when the entity was already linked to another entity and had to be detached in order to be attached to the new one. This has been fixed. If the principal multiplicity is 0..1 or 1, the dependent multiplicity is 0..1, and the dependent end is nullable, the OData Producer will now remove the existing link. ================(Build #2214 - Engineering Case #792761)================ A user’s first request could have been very slow, and if there were many users with different access permissions, users would have encountered occasional slow requests. On first request, the OData Producer must build the metadata for that user, which it then caches. If there are many users with different permissions, the cache may unload metadata for a particular user. In this case, when that user makes a subsequent request, their metadata must be rebuilt. This has been fixed. The database query for retrieving the metadata has been improved. 
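The per-user metadata caching behaviour described in Engineering Case #792761 amounts to an LRU cache keyed by user. A minimal sketch, with illustrative names and capacity (not the product's internals):

```python
from collections import OrderedDict

class MetadataCache:
    """Build metadata once per user; evict the least recently used entry."""
    def __init__(self, capacity, build):
        self.capacity, self.build = capacity, build
        self.cache = OrderedDict()

    def get(self, user):
        if user in self.cache:
            self.cache.move_to_end(user)       # mark as most recently used
            return self.cache[user]
        meta = self.build(user)                # the slow first request
        self.cache[user] = meta
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return meta

builds = []
cache = MetadataCache(2, lambda u: builds.append(u) or f"meta:{u}")
cache.get("alice")
cache.get("alice")
assert builds == ["alice"]                     # built once, served from cache after
```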
================(Build #2179 - Engineering Case #789270)================ The value of the Location HTTP header in responses to POST requests was not properly encoded so that it could be used directly as a URL. This has now been fixed. ================(Build #2151 - Engineering Case #786419)================ The OData Producer may have ignored a directive to accept a media type if it had a quality score of 0. Example: "*/*;q=0". If no other suitable media type was acceptable, the request would have failed with an UNACCEPTABLE response. This has been fixed. ================(Build #2151 - Engineering Case #786369)================ The OData Producer would have ignored HTTP ACCEPT headers when formatting error responses. This has been fixed. If a request accepts JSON responses instead of XML, the error will now be returned in JSON. ================(Build #2130 - Engineering Case #783939)================ The XML metadata for service operations used the MethodAccess attribute to denote what HTTP method was allowed, but the correct attribute is HttpMethod. Some client libraries that require strict matching of service operation name and methods (such as SAPUI5) would not have been able to use service operations. SAPUI5 works with service operations that are invoked to return JSON results. This has been fixed. ================(Build #2129 - Engineering Case #783811)================ Service operations whose underlying database stored procedures contained SQL keywords as names of the result set columns would not have been usable with the option ServiceOperationColumnNames=database. Requests for such service operations would have resulted in HTTP 500 - Internal Server Error. This has been fixed. ================(Build #2116 - Engineering Case #782830)================ Delayed constraint errors (such as CHECK ON COMMIT) would have resulted in internal server errors, resulting in diagnostic files. This has been fixed. 
Better error messages are now generated based on the time of the request. ================(Build #2111 - Engineering Case #782240)================ In rare cases, the Producer may have selected a read-only or non-existent diagnostic directory when environment variables were improperly set. This has been fixed. ================(Build #2091 - Engineering Case #780142)================ The OData Producer could have failed to retrieve metadata if the user had tables with the same name as system tables. This has been fixed. ================(Build #2065 - Engineering Case #776900)================ When using Internet Explorer to view results of some requests from the OData Producer, IE reported that it was unable to display the results due to an XML parsing error, or asked “Do you want to open or save odata from localhost?” This has been fixed. Note, this only affected results from some service operations when the output format was XML. ================(Build #2055 - Engineering Case #776140)================ In an OData server with multiple producers, the producers would have corrupted each other’s state, resulting in a high likelihood of incorrect behaviour. This has been fixed. ================(Build #2046 - Engineering Case #774600)================ When the OData Producer's ConnectionPoolMaximum configuration option was not specified, the producer would have set the value to the maximum number of connections available on the database. This could have starved other applications of database connections. This has been fixed. The default behavior is now to set the value for this option to be half the database’s maximum number of connections. ================(Build #2045 - Engineering Case #773983)================ A request for a raw binary value would have been returned with a content-type of 'text/plain' (or equivalent), when it should have been 'application/octet-stream'. This has been fixed. 
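The Accept-header handling fixed in Engineering Case #786419 hinges on quality values: a media type with q=0 is explicitly not acceptable. A simplified parsing sketch (not the Producer's code):

```python
def acceptable_types(accept_header):
    """Return (media_type, quality) pairs, excluding anything with q=0."""
    result = []
    for item in accept_header.split(","):
        parts = [p.strip() for p in item.split(";")]
        media, q = parts[0], 1.0          # quality defaults to 1.0
        for p in parts[1:]:
            if p.startswith("q="):
                q = float(p[2:])
        if q > 0:                         # q=0 means "never send this type"
            result.append((media, q))
    return result

assert acceptable_types("*/*;q=0") == []  # nothing acceptable -> 406-style response
assert acceptable_types("application/json, */*;q=0") == [("application/json", 1.0)]
```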
================(Build #2040 - Engineering Case #773811)================ The OData Server could have crashed on Linux systems when using an INI file and no LIBRARY_PATH was specified in the INI file. This has been fixed. ================(Build #2030 - Engineering Case #772899)================ The OData producer allowed properties and navigational properties to have the same name as the containing type (for example, Entity type T1 could have a property called T1 and a navigational property called T1), contrary to the OData specification. Association names could have the same name as a complex type or entity type. The most common occurrence for this issue would be when the producer generated associations for a database Table that had a self-referring foreign key. While some OData clients ignored these naming restrictions, others such as Microsoft's would not have worked with a service whose metadata contained these name conflicts. This has been fixed. OSDL files with such name conflicts now produce errors, and generated navigational properties and associations are given better names. ================(Build #2030 - Engineering Case #772828)================ If the URI for a service operation was specified with missing input parameters, the OData Producer will now pass a NULL value into the underlying stored procedure or function. If a default value existed for the input parameter to the stored procedure or function, the OData Producer would have used the default value, which was contrary to the OData specifications. ================(Build #2030 - Engineering Case #771964)================ If a service operation had been defined to call a function, attempting to call the service operation would have resulted in an unexpected error in the OData Producer. This has now been fixed. 
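The naming restriction enforced after Engineering Case #772899 can be sketched as a simple validation pass; the helper and sample names are illustrative:

```python
def check_entity_type(type_name, properties):
    """Per the OData spec, a property or navigation property may not share the
    name of its containing entity type; return any offending names."""
    return [p for p in properties if p == type_name]

# A self-referring foreign key used to generate exactly this conflict:
assert check_entity_type("T1", ["id", "T1"]) == ["T1"]
assert check_entity_type("Orders", ["id", "CustomerID"]) == []
```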
================(Build #2023 - Engineering Case #772035)================ A stored procedure with an input parameter whose name was an invalid OData identifier would only have encountered problems when the service operation was called. A stored procedure with an input parameter whose name is an invalid OData identifier will now generate an error when the service operation is defined. ================(Build #2022 - Engineering Case #771866)================ Errors in batch change sets were formatted as multipart, contrary to the OData specification. For example: --batch_5f3ccbef-11fe-4dcc-a546-02ba2e746d72 Content-Type: multipart/mixed; boundary=cs_025fc4e4-fd30-4372-8fa4-5b941ee66915 --cs_025fc4e4-fd30-4372-8fa4-5b941ee66915 Content-Type: application/http Content-Transfer-Encoding: binary HTTP/1.1 400 Bad Request RepeatabilityResult: accepted Content-Type: application/json;charset=utf-8 { "error" : { "code" : "30063", "message" : { "lang" : "en-US", "value" : "An entity instance with this key already exists." } } } --cs_025fc4e4-fd30-4372-8fa4-5b941ee66915-- --batch_5f3ccbef-11fe-4dcc-a546-02ba2e746d72-- should be: --batch_5f3ccbef-11fe-4dcc-a546-02ba2e746d72 Content-Type: application/http Content-Transfer-Encoding: binary HTTP/1.1 400 Bad Request RepeatabilityResult: accepted Content-Type: application/json;charset=utf-8 { "error" : { "code" : "30063", "message" : { "lang" : "en-US", "value" : "An entity instance with this key already exists." } } } --batch_5f3ccbef-11fe-4dcc-a546-02ba2e746d72-- This has now been fixed. ================(Build #2015 - Engineering Case #770940)================ When an entity modification request violated a check constraint, the OData Producer responded with an internal server error like: <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <code>30000</code> <message xml:lang="en-US">An unexpected error occurred in the producer. 
Contact the server administrator for more details.</message> </error> This has been fixed. Check constraints will now return either 30144, 30145 or 30146 errors, similar to unique constraint violations, depending on the type of request (INSERT, UPDATE or DELETE). ================(Build #1931 - Engineering Case #764783)================ GET requests with $top=0 should be allowed on collections but return no result. and an HTTP 500 error. This has been fixed. ================(Build #1918 - Engineering Case #764052)================ Filter expressions using startswith(s1,s2) would have translated to a LOCATE() SQL function call, which does not result in the SQL Anywhere optimizer using indexes. This has been improved so that if the search string is a literal or parenthesized literal, whose length is less than 126, LIKE will now be used If the searched item is a column reference, a LIKE hint will be provided. ================(Build #1915 - Engineering Case #763854)================ Many clients, including .NET, would not have specified the charset on the content type HTTP request header and would assume that the default for JSON and XML requests was UTF-8. The OData Producer always assumed that HTTP requests were ISO-8859-1 based on RFC 2616. This would have resulted in non-ASCII characters being incorrectly read. This has been fixed. The OData Producer now assumes UTF-8 for JSON and XML requests and ISO-8859-1 for plain text requests. ================(Build #1914 - Engineering Case #763804)================ Differing case for Edm.Guid values in the request URL and request body could have caused an UPDATE to fail with an error message that the entity key cannot be modified. This has been fixed. ================(Build #1912 - Engineering Case #763708)================ The OData Producer would have rejected a configuration file that had multiple producers when it mistakenly thought the service roots were duplicates or subpaths of each other (for example, /a and /ab). 
This has been fixed. ================(Build #1910 - Engineering Case #763548)================ OData requests to update a single nullable property would have failed to set the property to null. This has been fixed. ================(Build #1901 - Engineering Case #763011)================ The JSON attributes "id", "uri" and "__next", XML "id" elements, and XML attributes such as xml:base and href, would not have been encoded properly if they contained non-ASCII characters or ASCII characters that have special meaning in URLs. For example, if an entity instance had a string key value of "A#1" and the entity set was called "Orders©", the "#" and "©" would not have been properly encoded. This has been fixed. All hyperlink attributes are now encoded to use % escape sequences so they can be used as-is in HTTP requests. While IDs are now properly encoded, titles and names are not. ================(Build #1880 - Engineering Case #761750)================ Users who had permission to access stored procedures or database functions through roles may not have had access to the corresponding service operations in OData. The affected operations would not have appeared in the metadata, and the user would have received 404 errors trying to invoke them. This has been fixed. As a workaround, database administrators can directly grant execute permission on the given stored procedures or functions to the affected users. ================(Build #1872 - Engineering Case #761217)================ The OData Producer did not properly parse OData batch request boundaries from the Content-Type header field. In particular, a content type of the form: Content-Type: multipart/mixed; boundary=batch_e30b82d3-3d8a-430d-b66f-9fec1df8ae19; charset=utf-8 would have resulted in a 404 error. This has been fixed. 
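The % escaping described in Engineering case 763011 above can be illustrated with Python's standard library (a hypothetical stand-in for the producer's actual encoder, using the key value and entity-set name from the example):

```python
from urllib.parse import quote

key = "A#1"             # '#' starts a URL fragment, so it must be escaped
entity_set = "Orders©"  # '©' is non-ASCII (UTF-8 bytes 0xC2 0xA9)

# Encode every reserved and non-ASCII character as a % escape sequence.
print(quote(key, safe=""))         # A%231
print(quote(entity_set, safe=""))  # Orders%C2%A9
```

A client can then use the resulting href and id values as-is in subsequent HTTP requests.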
================(Build #1868 - Engineering Case #760644)================ When using the OData Producer as a servlet in a web server, the paths were relative to a current directory, which was unpredictable for the servlet. This has been fixed. The OData Producer will now search the servlet’s context first, then the current directory. For security reasons, the producer configuration and model files should be located in the WEB-INF/ directory (or equivalent). This context may now include .WAR files. For OData Producers deployed using the OData Server utility (dbosrv16), paths are relative to the current directory at the time the server was launched. ================(Build #1822 - Engineering Case #757273)================ Long-form formats like application/json were not supported with the $format query option. This has been corrected by adding support for the following long-form constants: application/xml, application/atom+xml, application/json and */*. ================(Build #1815 - Engineering Case #756878)================ Improperly encoded URIs could have caused an internal server error. For example, executing the query "http://localhost:8105/odata/Filter03/T6?$filter=c1 eq '%'" against the OData producer would have caused the following internal server error: HTTP ERROR: 500 Problem accessing /odata/Filter03/T6. Reason: while trying to invoke the method com.sybase.odata.producer.util.RequestWrapper.getParameterMap() of a null object loaded from field com.sybase.odata.producer.handler.AbstractHandler.request of an object loaded from local variable 'this' This has been fixed. 
Clients will now get an HTTP 400 Bad Request error:

<?xml version="1.0" encoding="utf-8"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>30175</code>
<message xml:lang="en-US">An error occured while parsing the request URI at offset 59.</message>
</error>

================(Build #1810 - Engineering Case #756565)================ The parameters of the function substringof() were used in the wrong order; this has now been corrected. The intended meaning of substringof( s1, s2 ) is “return whether or not s1 is a substring of s2”. The OData server was doing the opposite: “return whether or not s2 is a substring of s1”. ================(Build #1803 - Engineering Case #756038)================ If an OData server was shut down and immediately restarted, it could have failed to open its port or shutdown port. This has been fixed. ================(Build #1803 - Engineering Case #756026)================ The following new features have been added to the OData Producer. Please see the documentation for full details.
- The OData Producer now supports Optimistic Concurrency Control as defined by version 2.0 of the OData Specification. Using the OSDL model definition file (see Example 1), a developer may specify a set of properties on an Entity Type to define the concurrency token of the Entity.
- The OData Producer now supports Service Operations using HTTP GET and POST methods as specified by version 2.0 of the OData Specification. Developers must declare explicitly which service operations to expose in an OSDL file.
- The OData Producer uses fewer objects when caching metadata for users that have (from the OData Producer’s perspective) identical access.
- OSDL files now support escape sequences in quoted strings.
- Associations are now annotated as referential constraints in the metadata, and association properties of referential constraints are now visible by default. 
Associations may also have OnDelete attributes in the metadata, which document how the dependent entity instance is affected when the associated principal entity instance is deleted.
- The log generated by OData Producers has been enhanced to identify the producer and request associated with each log event.
- The OData server now supports multiple producers. dbosrv16 can now host multiple producers, each connecting to a different database. All producers share the same options configuration file and are hosted on the same port. The producer configuration file syntax has been augmented for this feature.
- The OData producer now supports a greater subset of the OData Service Definition Language (OSDL). The new supported syntax adds the ability to:
  - Explicitly set the name of tables that are exposed through the producer
  - Explicitly include/exclude columns
  - Define entity sets with generated keys
  - Define associations between entities, including complex associations that use an underlying association table
  - Define navigation properties
================(Build #1803 - Engineering Case #755797)================ HTTP header names in batch requests were not treated as case-insensitive. This would have resulted in valid HTTP headers being ignored in the parts of a batch request. This has been fixed. ================(Build #1803 - Engineering Case #755513)================ The message used to shut down the OData server (dbosrv16) was too generic. This has now been changed from 'shutdown' to 'shutdown_sap_sqla_odata'. Note, older dbostop16 utilities will not be able to shut down newer OData Servers, and newer dbostop16 utilities will not be able to shut down older servers. ================(Build #1803 - Engineering Case #753522)================ If a skiptoken contained nulls, it would not have been parsed properly, resulting in an error like: The value for property "{columnname}" in a key predicate is formatted incorrectly. 
When paging in descending order on a column with nulls, the skiptoken may also have missed results or restarted at the top (thus creating an apparently infinite dataset). This has been fixed. ================(Build #1803 - Engineering Case #753386)================ When following skiptokens, the OData Producer could have been slower than optimal at retrieving data. This has been fixed. ================(Build #1803 - Engineering Case #753296)================ Edm.Decimal values differed in format between EntityIDs and response bodies. This has been fixed. ================(Build #1803 - Engineering Case #753293)================ The SQL Anywhere OData producer was not properly handling JSON requests containing dates before Jan 1, 1970. This has been fixed so that these dates are now handled correctly. ================(Build #1803 - Engineering Case #753178)================ Identifiers such as namespace, entity, association, property and service operation names were not restricted in length and allowable characters, as required by the OData specification. This has been fixed. Note, model files may still refer to database names that are not valid OData identifiers; however, they must be renamed using the “AS” clause. ================(Build #1795 - Engineering Case #755389)================ A $links request could have failed when a $orderby was used that included a non-key property of the related entity type. This has been fixed. ================(Build #1746 - Engineering Case #751863)================ The CSDL namespace URL in the metadata document was incorrectly referring to CSDL 1.0 using the URL http://schemas.microsoft.com/ado/2006/04/edm. It has now been updated to refer correctly to the CSDL 2.0 namespace URL: http://schemas.microsoft.com/ado/2008/09/edm. 
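Regarding Engineering case 753293 above: OData verbose-JSON dates are encoded as "/Date(ms)/" with milliseconds since the Unix epoch, so a date before Jan 1, 1970 arrives as a negative millisecond count. A minimal sketch of a parser that accepts both cases (a hypothetical helper, not the producer's actual code; timezone-offset suffixes are omitted):

```python
from datetime import datetime, timezone

def parse_odata_json_date(value):
    # "/Date(-86400000)/" -> milliseconds since 1970-01-01 UTC; a negative
    # count is a pre-epoch date and must not be rejected.
    ms = int(value[len("/Date("):-len(")/")])
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(parse_odata_json_date("/Date(-86400000)/"))  # 1969-12-31 00:00:00+00:00
print(parse_odata_json_date("/Date(0)/"))          # 1970-01-01 00:00:00+00:00
```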
================(Build #1717 - Engineering Case #749821)================ This change fixes several issues with regard to property attributes in the metadata document:
- The OData Version 2 specification is unclear about the case of the max constant for the MaxLength attribute of a property. Many clients will accept both “max” and “Max” as valid cases, but the Microsoft clients only accept “Max”. In OData version 4+, the spec is clear that the correct case is “max”. To avoid such issues two things have been done: 1. For SQLA and ASE, the MaxLength attribute will always be specified with a number in the range 1 to 2^31-1 (to avoid the use of “max” altogether). 2. It is possible for an IQ column to have a value that is larger than 2^31-1 bytes/characters, in which case we must continue to use “max”. The default case will continue to be “max”, but the OData server administrator can override the case using the __MaxLengthMaxString option in the config file.
- The DefaultValue attribute was being used incorrectly. The Microsoft OData clients expect the DefaultValue attribute of a given type to be a valid value (as would appear in a request URL) for that type. For example, the DefaultValue for an Edm.Int32 must be a 32-bit integer. This is somewhat unclear in the version 2 OData spec but is clearly the stated behaviour in OData version 4+. The DefaultValue attribute was being used to represent the underlying default of the database (for example, an autoincrement column would have DefaultValue=”autoincrement”). Since the DefaultValue attribute is not appropriate for all possible default values that an underlying column in the database can have, we will no longer expose default value information in the metadata document using the DefaultValue attribute.
- ASE Money data type columns were reporting a Precision and Scale of 0. 
================(Build #1639 - Engineering Case #745267)================ The maximum value allowed for the OData Producer PageSize option was only 1000. This was deemed unreasonably small and has now been increased to 1,000,000. The default setting has not been changed. ================(Build #1584 - Engineering Case #742037)================ Requests to update or delete entities could have returned incorrect errors in an environment with many clients modifying entities. ================(Build #1575 - Engineering Case #741432)================ JSON Edm.DateTime values were being returned relative to the local time zone instead of UTC. This has been fixed. ================(Build #1522 - Engineering Case #738147)================ When creating entities in an entity set representing a proxy table in a SQL Anywhere database, without fully specifying key properties, the OData Producer would have given an inaccurate error message (30125 - "One or more properties could not be reset to a default value."). This has been fixed. Now, if any of the missing key properties does not have a default value, a 30124 error ("Cannot set a non-nullable property to null.") will be returned. Otherwise, if the missing key properties have default values, a 30154 error ("Key properties with default values must be explicitly set in this entity set.") will be returned. ================(Build #1515 - Engineering Case #737416)================ When the OData Producer tried to create an entity against a SQL Anywhere proxy table, the server would have shut down and the Producer would have failed. This has been fixed. However, the OData Producer for SQL Anywhere will not allow the use of default values in the proxy table's primary key. Clients must specify all primary key properties explicitly when creating new entities. ================(Build #1493 - Engineering Case #735992)================ When computing the Multiplicity of an Association, the OData Producer was not taking unique indexes into account. 
For example, the tables T8 and T8b below are functionally identical but generated different multiplicity:

CREATE TABLE T8 (
    pk1 INTEGER NOT NULL,
    c1 INTEGER,
    PRIMARY KEY (pk1),
    FOREIGN KEY F_KEY_T8 (c1) REFERENCES T7 MATCH UNIQUE SIMPLE );

CREATE TABLE T8b (
    pk1 INTEGER NOT NULL,
    c1 INTEGER NULL,
    PRIMARY KEY (pk1));
CREATE UNIQUE INDEX T8bIndex on T8b(c1);
ALTER TABLE T8b ADD CONSTRAINT F_KEY_T8b FOREIGN KEY (c1) REFERENCES T7 (pk1)

This resulted in:

<Association Name="F_KEY_T8">
  <End Role="T8_Dependent" Type="SAPSybaseOData.T8" Multiplicity="0..1"/>
  <End Role="T7_Principal" Type="SAPSybaseOData.T7" Multiplicity="0..1"/>
</Association>
<Association Name="F_KEY_T8b">
  <End Role="T8b_Dependent" Type="SAPSybaseOData.T8b" Multiplicity="*"/>
  <End Role="T7_Principal" Type="SAPSybaseOData.T7" Multiplicity="0..1"/>
</Association>

This has been fixed. Table T8b above now returns associations identical to those of table T8. ================(Build #1469 - Engineering Case #733645)================ The Precision and Scale attributes for Edm.Decimal properties in the metadata document may have incorrectly displayed values outside of the allowed range for Edm.Decimals if the underlying DECIMAL or NUMERIC column used a precision greater than 58, or a scale greater than 29 (the maximums as defined in the OData spec). This has been fixed. Note that this is only an issue in the metadata document itself. The actual values for Edm.Decimal properties are enforced to be within the allowed range as defined in the OData spec. ================(Build #1463 - Engineering Case #733438)================ The OData Producer could not access the metadata of a SQL Anywhere table (and therefore could not perform any operations on it) when, as a result of the connecting user's permissions, it could only view a subset of the columns. 
For example, the table ColumnPerm defined below is owned by dba, and user httpAuthUser3 is granted select on only the id and v1 columns:

CREATE TABLE dba.ColumnPerm(
    id INTEGER NOT NULL DEFAULT AUTOINCREMENT,
    v1 VARCHAR(128) NOT NULL,
    v2 VARCHAR(128) NOT NULL,
    PRIMARY KEY( id ) )
go
GRANT SELECT( id, v1 ) ON dba.ColumnPerm to httpAuthUser3
go

This has been fixed. When an OData Producer connects as httpAuthUser3 (in the example above), it will see metadata for table ColumnPerm with columns id and v1 (but not v2). ================(Build #1453 - Engineering Case #733096)================ OData filters running against SQL Anywhere databases, using startswith(), substringof() and indexof() with long search strings, would have returned nothing. To correct this, search strings are now restricted to 254 bytes for SQL Anywhere databases. If longer strings are supplied, searches will only use the first 254 bytes. This restriction does not apply to OData queries against ASE databases. ================(Build #1447 - Engineering Case #731291)================ The producer configuration option ServiceRoot, as used by the OData Server, was ignored and the default /odata was always used. This has been fixed. ================(Build #1443 - Engineering Case #731886)================ The OData server could have started and reported that it was listening on a port that was already in use. This has been fixed. ================(Build #1440 - Engineering Case #731843)================ The OData server could have failed to start because one of the ports (shutdown or server port) could not be used, but the port number was not included in the error message. This has been corrected.

SQL Anywhere - OLEDB Client Library

================(Build #2500 - Engineering Case #808768)================ When using Microsoft SQL Server Integration Services (SSIS, DTSWizard) to move a table from a Microsoft SQL Server database to SAP SQL Anywhere or SAP IQ, the OLE DB provider failed to commit the rows inserted into the table. This problem has been fixed. The OLE DB provider will commit any uncommitted rows, provided that a ROLLBACK has not been performed. ================(Build #2252 - Engineering Case #795979)================ When using the SQL Anywhere OLE DB provider, attempting to move forward more than one record using the Recordset.Move function would have failed if the cursor type was a forward-only no-scroll cursor. This problem has been fixed. ================(Build #2223 - Engineering Case #793846)================ When using a SQL Anywhere OLE DB Linked Server object from Microsoft SQL Server, a COMMIT or ROLLBACK of a distributed transaction would have failed. For example, when attempting to update a row in the Contacts table of the SQL Anywhere demonstration database using Microsoft SQL Server:

begin tran t2;
update SQLATest.demo.groupo.contacts
    set surname = surname + t.val
    from (select 2 i, '???' val) t
    where id = t.i;
commit tran t2;
select surname from SQLATest.demo.groupo.contacts where id <= 4;

error messages, including one indicating that the OLE DB provider “reported an error committing the current transaction”, were displayed. This problem has now been fixed. Also fixed are nested transactions using ADO and native SQL Anywhere OLE DB. Microsoft SQL Server does not support nested distributed transactions. Note, transactions using Linked Servers are always distributed transactions. ================(Build #1848 - Engineering Case #732743)================ Original description for Engineering case 706876: A Microsoft Data Link Error could have occurred with newer versions of Microsoft software when using the SQL Anywhere OLE DB provider. 
When the Test Connection button was clicked, the following message would have been displayed when the error occurred: Test connection failed because not all properties can be set. Window Handle (BAD VALUE) Continue with test connection? [Yes] [No] The message was informational only and Yes could be clicked. If the credentials and other connection information were correct, the connection succeeded. That problem was fixed as follows: instead of returning a NULL window handle to the Microsoft software, the OLE DB Window Handle property was marked as unsupported, which removed the warning message. === That solution was incorrect. Some applications require support for the Window Handle property and terminate if it is not supported (e.g., the ROWSETVIEWER application). The problem has now been corrected. The Window Handle property value was improperly described as 32-bit in a 64-bit application. ================(Build #1554 - Engineering Case #740334)================ The ICommandPersist interface methods, LoadCommand, DeleteCommand, and SaveCommand, did not qualify system table references with an owner name. This has been corrected. ================(Build #1500 - Engineering Case #736527)================ The OLE DB provider accepts two cbBookmark values: one is the “short” DBBMK_FIRST/LAST value, and the other is "4" or "8" depending on the bitness of the provider. The 64-bit provider was flagging "4" as an illegal value for cbBookmark, and the 32-bit provider flagged "8" as an illegal value for cbBookmark. The OLE DB provider should have accepted both values as the length of a bookmark value and fetched the appropriate 32-bit/64-bit bookmark value from memory. This problem affected IRowsetLocate::GetRowsAt, IRowsetLocate::Compare, IRowsetLocate::GetRowsByBookmark, IRowsetLocate::Hash, IRowsetScroll::GetApproximatePosition, and IRowsetExactScroll::GetExactPosition. 
It has now been fixed so that both the 32-bit and 64-bit providers support 4-byte and 8-byte bookmark values, in addition to 1-byte values. ================(Build #1490 - Engineering Case #736264)================ A number of improvements and bug fixes have been made to the SQL Anywhere OLE DB provider.
1 - When a column cannot be fetched in its entirety, the status is now set to DBSTATUS_S_TRUNCATED instead of DBSTATUS_S_OK, and the length is set to the actual length, not the amount fetched.
2 - The IRowsetUpdate methods InsertRow/Update now insert rows in manual commit mode (i.e., commit in batches), rather than autocommitting each row.
3 - Improved support for the DBTYPE_DBTIME2/DBTYPE_DBTIMESTAMPOFFSET data types.
4 - In order to identify columns that are DEFAULT AUTOINCREMENT, IColumnsInfo::GetColumnInfo now sets the DBCOLUMNFLAGS_ISROWVER bit for those columns. Microsoft defines a column with this attribute as a non-writable versioning column (such as the SQL Server TIMESTAMP), which suits SQL Server. Note, however, that SQL Anywhere supports versioning columns that are writable.
5 - Corrected a failure to describe money/smallmoney as DBTYPE_CY (currency type). Also corrected the OLE DB schema query (DBSCHEMA_COLUMNS, DBSCHEMA_PROCEDURE_COLUMNS, and DBSCHEMA_PROCEDURE_PARAMETERS) results for DBTYPE_CY.
6 - Corrections were made to the schema rowset information (DBSCHEMA_COLUMNS, DBSCHEMA_PROCEDURE_COLUMNS, and DBSCHEMA_PROCEDURE_PARAMETERS) for datetime/time precision and scale.
7 - Corrections were made to “run-time” information for datetime/time precision and scale.
8 - Added "DATETIME" to the list of DBPARAMBINDINFO.pwszDataSourceType types for SetParameterInfo (SQL Server uses this undocumented type name). Type names are usually of the form “DBTYPE_xxx” (for example, “DBTYPE_I4”, “DBTYPE_STR”, “DBTYPE_DBTIMESTAMP”).
9 - Adjusted GetConversionSize values for the TIME, DATETIME, DATETIMEOFFSET data types (only 6 fractional digits are supported by SQL Anywhere). 
10 - A memory leak caused by a failure to free rows whose refcount is 0 in Update() was fixed.
11 - A possible memory corruption in calls to IRowsetChange::SetData, IRowsetChange::InsertRow, ISequentialStream::Write, and IRowChange::SetColumns was fixed.
12 - A performance problem where DataConvert was called even though no conversion was required was fixed.
13 - A performance issue with SQL_NUMERIC columns, with values comprising 19 to 37 decimal digits, was fixed.

SQL Anywhere - Other

================(Build #2671 - Engineering Case #814467)================ The SQL Anywhere C API sqlany_get_data(a_sqlany_stmt *, sacapi_u32, size_t, void *, size_t) function could loop forever trying to fetch a blob column from the server if the server returned an error during the fetch. This gave the appearance that the client application or server was hung. This problem also affected Perl, PHP, Python, Ruby, JavaScript and any other application programming interface that uses the SQL Anywhere C API (dbcapi). This problem has been fixed. ================(Build #2454 - Engineering Case #806359)================ The Unix installer created a C-shell configuration script that incorrectly included a Bourne shell-style test statement. When run, the script gave the error: "Missing ]". This has been fixed. ================(Build #2340 - Engineering Case #802406)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.2i. ================(Build #2320 - Engineering Case #800617)================ If a TLS connection to a database was made using a thread, and the thread disconnected and terminated, then two handles and some memory were lost (leaked). This happened whether connections were pooled or not, and only occurred for TLS encrypted connections. This problem has been fixed. Note that this fix applies to any client API that is capable of creating threads that can connect to the database using TLS. These include .NET, ESQL, JDBC, ODBC, OLEDB, and many others. ================(Build #2306 - Engineering Case #799885)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.2h. In addition, the version of the OpenSSL FIPS library has been upgraded to 2.0.12. ================(Build #2306 - Engineering Case #799884)================ The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.44. 
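The sqlany_get_data hang described in Engineering case 814467 above comes from a piecewise fetch loop that never checks for an error return. The defensive pattern the fix implies can be sketched in Python with a hypothetical stand-in for the C call (this is an illustrative sketch, not dbcapi code; here `get_data(offset, size)` returns a `(count, chunk)` pair where a negative count signals a server-side error):

```python
def fetch_blob(get_data, chunk_size=8192):
    """Fetch a blob column piecewise via a hypothetical get_data callback."""
    data, offset = bytearray(), 0
    while True:
        count, chunk = get_data(offset, chunk_size)
        if count < 0:
            # Without this check, a mid-fetch server error makes the
            # loop spin forever -- the hang described above.
            raise RuntimeError("server reported an error during the fetch")
        if count == 0:
            return bytes(data)  # end of column data
        data.extend(chunk)
        offset += count

# Simulated column value served in chunks:
blob = bytes(range(256)) * 100
result = fetch_blob(lambda off, n: (len(blob[off:off + n]), blob[off:off + n]))
print(result == blob)  # True
```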
================(Build #2222 - Engineering Case #793527)================ SQL Anywhere software on UNIX platforms required LD_LIBRARY_PATH (or the equivalent for the platform) to be set in order for the loader to find dependent libraries. This made the use of the software inconvenient, especially when interfacing with third-party applications. Additionally, SQL Anywhere software may have failed to find dependent libraries on Mac OS X 10.11 systems, even if DYLD_LIBRARY_PATH was set properly. The new security policy in this version of Mac OS X caused DYLD_LIBRARY_PATH to be unset in certain cases, causing the loader to fail to find libraries. This has been fixed to some degree on all UNIX platforms except AIX. However, some use cases will still need this or some other environment variable to be set. In some cases, user applications will need some adjustment. Specifically:
- When using a client API that requires libdbcapi, either LD_LIBRARY_PATH or SQLANY_API_DLL must be set correctly.
- When using Java with native libraries, such as JDBC (including when using the Java external environment; except on Mac), either LD_LIBRARY_PATH must be set or the -Djava.library.path=/path/to/sqlanywhere/lib64 command line switch must be used.
- When using external functions, either LD_LIBRARY_PATH must be set or the full path to the shared library must be provided.
- If you use a custom install layout, you may find that LD_LIBRARY_PATH is still needed.
On Mac OS X, it is possible to use install_name_tool to provide additional search paths instead of using DYLD_LIBRARY_PATH. ================(Build #2182 - Engineering Case #789607)================ In rare circumstances, the SQL Anywhere installer on Unix could have crashed during an upgrade. This has been fixed. A workaround is to uninstall the old SQL Anywhere software and perform a new installation of the new software. 
================(Build #2173 - Engineering Case #773002)================ Generated 64-bit MSI installs had the BIN32 directory in the PATH environment variable before the BIN64 directory. Also, the path contained an extra backslash between the SQL Anywhere directory and the BIN32 or BIN64 directories. Both of these problems have now been corrected. ================(Build #2171 - Engineering Case #788292)================ Running the Mac OS X installer setup from the “Terminal” application may have displayed an incorrect message after the step of selecting the country, such as the following: “Conversion from 'UTF8' to 'ANSI_X3.4-1968' is not supported.” This has been fixed. ================(Build #2147 - Engineering Case #785926)================ On Mac OS X systems, failing to allocate memory could have caused the process to crash. This applied to all processes within the SQL Anywhere product, and has now been fixed. ================(Build #2141 - Engineering Case #785089)================ The library dbrsa16.dll was missing in the client install for SQL Anywhere for Windows. The client install has now been modified to include this file. ================(Build #2130 - Engineering Case #783864)================ The SQL Anywhere Monitor Migrator would have failed when running on machines with a Turkish locale, regardless of the server machine’s locale. This has been fixed. ================(Build #2127 - Engineering Case #783682)================ If an application called the db_locate_servers or db_locate_servers_ex functions more than once, the second and subsequent calls would have failed. This has been fixed. ================(Build #2104 - Engineering Case #781487)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1m. ================(Build #2089 - Engineering Case #779882)================ When running the Unix installer on non-Linux systems, under a non-bash shell, it would have given the error "local: not found" during startup. 
The error was non-fatal and the installer would have proceeded. This has been fixed. ================(Build #2073 - Engineering Case #778063)================ The Start Menu shortcut “Download Documentation” has been changed to link to the SCN website. ================(Build #2073 - Engineering Case #778047)================ The Start Menu shortcut “Download Documentation” has been changed to link to the SCN website. For this change to take effect, the system must be rebooted after applying this change. ================(Build #2069 - Engineering Case #777135)================ The version of OpenSSL used by all SQL Anywhere components has been upgraded to 1.0.1k. The FIPS libraries have been upgraded from OpenSSL FIPS 2.0.5 to 2.0.9. ================(Build #2054 - Engineering Case #775928)================ The Interactive SQL utility could have incorrectly parsed a CREATE LOCAL TEMPORARY TABLE statement if it appeared within some other block structure (e.g. a BEGIN ... END block). This caused a problem if a subsequent DBISQL statement (e.g. INPUT, OUTPUT, EXIT) was encountered. The symptom was typically an error message from the database server which reported a syntax error near the first keyword of the DBISQL statement. This has been fixed. ================(Build #2000 - Engineering Case #768926)================ The installer’s default feature selection on Windows x64, combined with a limitation of InstallShield, resulted in the 32-bit OLEDB driver not being registered as a COM server. This has been fixed. ================(Build #1992 - Engineering Case #767530)================ The SQL Anywhere ODBC sample inside the C example directory (odbc.c) used ODBC 2.0 calls and could not link against the SQL Anywhere Driver Manager for Unix on Unix platforms. This has been corrected so that the sample now uses ODBC 3.0 calls and can link against the driver manager successfully. 
================(Build #1979 - Engineering Case #760153)================ Creating a Deployment Wizard installation which included the Interactive SQL utility, the MobiLink Profiler or the Console utility, but not Sybase Central, would have failed to include JComponents1600.jar. This has been fixed. ================(Build #1954 - Engineering Case #766224)================ When running the SQLA SP installer in silent mode on Linux, it could have incorrectly given the error "The registration key provided is invalid." This has been fixed. ================(Build #1901 - Engineering Case #763021)================ A dbtools sample is expected to perform a backup of the Demo database as a way to demonstrate how to load and use the dbtools dynamic library. On Mac OS X, the dbtools sample would have failed to load the dbtools dynamic library, so it would return to the shell without performing the backup or producing any output. This has been fixed. ================(Build #1894 - Engineering Case #762606)================ In the DBMirror sample that is included with the SQL Anywhere software, the mirror setup SQL script contained a typographical error, so no mirror partner was established. The "mirror" partner merely acted as a copy node. Attempts to connect to the mirror partner would have failed. This problem has been corrected. ================(Build #1833 - Engineering Case #758327)================ When the PHP external environment on Windows was used during an HTTP request (with thread-safe PHP), the PHP process may have crashed or behaved incorrectly. This has been fixed. ================(Build #1739 - Engineering Case #751440)================ When installing SQL Anywhere on Solaris, AIX, HP-UX and Mac OS X systems, some of the 32-bit files were erroneously excluded. This meant that some 32-bit client programs or external environments may not have been able to run. There was no workaround other than to use the 64-bit equivalents of these programs. This has been fixed. 
The missing files will now be installed along with the SQL Anywhere Client component. ================(Build #1697 - Engineering Case #748541)================ When upgrading a 32-bit SQL Anywhere installation containing SQL Anywhere Monitor, the 64-bit server would have been installed, whether or not it was previously installed. Similarly, upgrading a 64-bit SQL Anywhere installation containing SQL Anywhere Monitor would have caused the 32-bit server to be installed, whether or not it was previously installed. This has been fixed. ================(Build #1697 - Engineering Case #748538)================ Specifying the command line option -nogui after the option -silent when running the UNIX setup script would have caused the -silent flag to be ignored. This has been fixed. The -nogui option now has no effect if -silent is specified at any point. A workaround is to avoid specifying -nogui when doing a silent install. ================(Build #1697 - Engineering Case #748535)================ The Linux GTK and Mac OS X UI installers improperly reported insufficient disk space if there was more than 2 TB of available disk space. This has been fixed. A workaround is to use setup -nogui to avoid the graphical installer. Alternatively, install to a smaller disk drive. ================(Build #1563 - Engineering Case #740795)================ Java is not pre-installed with Mac OS X versions 10.7 (Lion) and above. In order to use the administration tools on Mac OS X, the Java SE Runtime Environment 7 (JRE 1.7) must be manually installed. Without a JRE properly installed, the administration tools silently fail to start. When the administration tools were selected in the install, the user should have been notified that they would need to install JRE 1.7 if it was not already installed. This has been improved: messages are now provided by the “Install SQL Anywhere” application and/or by setup when run from the “Terminal” application. 
The messages are displayed if JRE 1.7 is not installed on the system (Mac OS X) when installing the components that require Java. ================(Build #1555 - Engineering Case #740414)================ The SQL Anywhere Extension Agent library (dbsnmp*.dll) was not being installed on 64-bit systems. The 64-bit install needs to install the 32-bit dbsnmp*.dll to support Microsoft SNMP, which is a 32-bit only service. This has been fixed. ================(Build #1555 - Engineering Case #740412)================ The 32-bit version of the Version 9 or earlier physical store library (dboftsp.dll) was not installed on a 64-bit OS when the 32-bit Client feature was selected. Furthermore, dboftsp.dll was not installed when the MobiLink or SQL Remote features were selected. This has been fixed. ================(Build #1524 - Engineering Case #738400)================ In the 12.0.1 installer for Mac OS X, there is an information panel at the end of the install that tells users that they must build the UltraLite library from the source files included in the install prior to developing on iPhone or building the samples. This panel was erroneously excluded in the 16.0 GA installer, though it still applies, and has now been added. ================(Build #1522 - Engineering Case #738036)================ Support for callbacks has been added to version 3 of the SQL Anywhere C API. The following function is now available when _SACAPI_VERSION is defined as 3:

void sqlany_register_callback(
    a_sqlany_connection * sqlany_conn,
    a_sqlany_callback_type index,
    SQLANY_CALLBACK_PARM callback );

This function can be used to register callback functions.

Parameters:
  sqlany_conn  A connection object with a connection established using sqlany_connect().
  index        Any of the following: CALLBACK_START, CALLBACK_WAIT, CALLBACK_FINISH, CALLBACK_MESSAGE, CALLBACK_CONN_DROPPED, CALLBACK_DEBUG_MESSAGE, CALLBACK_VALIDATE_FILE_TRANSFER.
  callback     Address of the callback routine.

The index parameter values correspond to the index parameter values of the Embedded SQL/DBLIB db_register_a_callback function (http://dcx.sybase.com/goto?page=sa160/en/dbprogramming/db-register-a-callback-esql.html).

typedef enum a_sqlany_callback_type {
    CALLBACK_START = 0,
    CALLBACK_WAIT,
    CALLBACK_FINISH,
    CALLBACK_MESSAGE = 7,
    CALLBACK_CONN_DROPPED,
    CALLBACK_DEBUG_MESSAGE,
    CALLBACK_VALIDATE_FILE_TRANSFER
} a_sqlany_callback_type;

Callback routines are typed "SQLANY_CALLBACK". This corresponds to the Embedded SQL/DBLIB SQL_CALLBACK type. The a_sqlany_message_type enum is used with message callbacks (index=CALLBACK_MESSAGE).

typedef enum a_sqlany_message_type {
    MESSAGE_TYPE_INFO = 0,
    MESSAGE_TYPE_WARNING,
    MESSAGE_TYPE_ACTION,
    MESSAGE_TYPE_STATUS,
    MESSAGE_TYPE_PROGRESS
} a_sqlany_message_type;

Here is an example from SDK\dbcapi\examples\callback.cpp:

api.sqlany_register_callback( sqlany_conn, CALLBACK_MESSAGE, (SQLANY_CALLBACK_PARM)messages );

void SQLANY_CALLBACK messages(
    void *sqlca,
    a_sqlany_message_type msg_type,
    int sqlcode,
    unsigned short length,
    char *msg )
{
    size_t mlen;
    char mbuffer[80];
    switch( msg_type ) {
        case MESSAGE_TYPE_INFO:
            printf( "The message type was INFO.\n" );
            break;
        case MESSAGE_TYPE_WARNING:
            printf( "The message type was WARNING.\n" );
            break;
        case MESSAGE_TYPE_ACTION:
            printf( "The message type was ACTION.\n" );
            break;
        case MESSAGE_TYPE_STATUS:
            printf( "The message type was STATUS.\n" );
            break;
        case MESSAGE_TYPE_PROGRESS:
            printf( "The message type was PROGRESS.\n" );
            break;
    }
    mlen = __min( length, sizeof(mbuffer) - 1 );
    strncpy( mbuffer, msg, mlen );
    mbuffer[mlen] = '\0';
    printf( "Message was \"%s\" SQLCODE(%d)\n", mbuffer, sqlcode );
}

A complete callback example with two callback routines can be found in SDK\dbcapi\examples\callback.cpp. ================(Build #1522 - Engineering Case #725184)================ SQL Anywhere permits the use of identical foreign key CONSTRAINT names on different tables. 
Some third-party software tools cannot handle duplicate constraint names. As a result, the sample database demo.db that is shipped with SQL Anywhere has been modified so that it now has unique foreign key constraint names. The constraint name FK_CustomerID_ID in the GROUPO.Contacts table has been renamed to FK_CustomerID_ID2, and the constraint name FK_ProductID_ID in the GROUPO.MarketingInformation table has been renamed to FK_ProductID_ID2. ================(Build #1520 - Engineering Case #737992)================ The error message that was displayed when help could not be opened for the Console or Interactive SQL utilities was always in English. Now the message can be localized. Also, the title for the message window was incorrectly empty; this has been fixed. ================(Build #1471 - Engineering Case #733922)================ When run on Unix systems, the uninstaller always returned an error code of 1. This has been fixed. ================(Build #1471 - Engineering Case #733915)================ When installing sub-components in silent mode, for example with:

setup -silent … -install sqlany64,sqlanyclnt32

the installer may have given an error like:

The following option names are invalid or are not exposed by the registration key provided: sqlanyclnt32 sqlany64

Another symptom of the same problem could be seen using the -list_packages switch; for example:

setup … -list_packages

would have output garbled messages. This has been fixed. ================(Build #1463 - Engineering Case #733443)================ Building the PHP external environment using phpize, or by integrating into the PHP source code, would have failed. On version 16, it fails to find libdblib at the configure stage. On both versions, it fails to produce a usable PHP module. This has been fixed. ================(Build #1452 - Engineering Case #732943)================ Various batch and jdp* files may not have been updated by a support package. This has been fixed. 
================(Build #1451 - Engineering Case #732615)================ On 64-bit systems, the feature selection option for the 32-bit server feature would not have installed the feature when set to 1 (i.e., SERVER32=1). This has been fixed. ================(Build #1411 - Engineering Case #728972)================ The header for the first column (the row header "Property Name") of the Server and Database property lists could have been truncated if the header text was longer than the longest property name being shown in the table. This problem was readily apparent when running the program in French, but it affected all languages. This has been fixed. The first column is now sized wide enough for the header text and the names of the properties shown in the table.

SQL Anywhere - Server

================(Build #2784 - Engineering Case #817669)================ The server incorrectly returned the SQL error SQLE_CANNOT_MODIFY if a procedure call in a trigger body took an old row column as an INOUT or OUT parameter argument. This has been fixed. To work around the problem, define the procedure parameter as IN, or assign the old row column value to a local variable and use that variable as the procedure argument. ================(Build #2779 - Engineering Case #817630)================ In some circumstances, the server could crash when performing an insert into a table that has a table check constraint. This has been fixed. ================(Build #2775 - Engineering Case #817368)================ Under very rare circumstances, the server may crash when closing a pooled HTTP connection. This has been fixed. To work around the problem, plan caching can be turned off (option Max_plans_cached = 0). ================(Build #2773 - Engineering Case #816954)================ The server did not release schema locks if a DROP TABLE, DROP VIEW or DROP MATERIALIZED VIEW statement did not find the object, but did find a same-named object of a different object type. For example: if there is a view named X but no table with that name, then DROP TABLE X would leave a schema lock on view X. Additionally, the server did not return an SQL error if IF EXISTS was not specified and an object with the expected object type was not found. This has been fixed. ================(Build #2769 - Engineering Case #817479)================ Under some conditions, combining an ADD ... WITH DEFAULT alter clause with a non-ADD alter clause could cause data corruption for a non-empty table. The error will most likely manifest as Assertion 200610 "Attempting to normalize a non-continued row". Under a different combination of ADD ... WITH DEFAULT and non-ADD clauses, the server may crash in the middle of the ALTER TABLE statement. These problems have been prevented by temporarily disallowing the combination of ADD ... 
WITH DEFAULT and non-ADD clauses for non-empty tables. The server reports a "Table must be empty" error in such situations. This has been fixed. ================(Build #2766 - Engineering Case #817415)================ The sp_parse_json function could be extremely slow when there were null values in the first set and many sets followed in the JSON input string. An example of this follows:

[{a:10,b:z1,c:null}, {a:11.2,b:z2,c:301}, ...]

In this case, the algorithm's performance becomes order N-squared (O(N^2)). Instead of returning a result in seconds, it can take several minutes, depending on the number of sets. This problem has been fixed. Also, an incorrect result was returned for sets where the first value is null and subsequent values are integer, floating-point, or Boolean types. Instead of null, the first result was 0. The following is an example:

CALL sp_parse_json('tvar', '[{x:null}, {x:1}, {x:2}]');
SELECT tvar[[1]].x, tvar[[2]].x, tvar[[3]].x;

This problem has been fixed. If the output row/array variable (argument 1) was defined before calling sp_parse_json, the row/array variable was usually rejected and an error was returned. The following is an example:

CREATE OR REPLACE VARIABLE tvar ARRAY OF ROW(
    a VARCHAR(32),
    b ARRAY OF ROW( b1 LONG NVARCHAR, b2 LONG NVARCHAR ),
    c BIT,
    d NUMERIC(5,2) );
CALL sp_parse_json('tvar', '[{a:"json", b:[{b1:"hello", b2:"goodbye"},{b1:"say", b2:"again"}], c:true, d:12.34}, {a:"json2", b:[{b1:"hello2", b2:"goodbye2"},{b1:"say2", b2:"again2"}], c:false, d:56.78}]');
SELECT tvar[[x.row_num]].a AS a,
       tvar[[x.row_num]].b[[y.row_num]].b1 AS b1,
       tvar[[x.row_num]].b[[y.row_num]].b2 AS b2,
       tvar[[x.row_num]].c AS c,
       tvar[[x.row_num]].d AS d
FROM sa_rowgenerator(1,CARDINALITY(tvar)) AS x,
     sa_rowgenerator(1,CARDINALITY(tvar[[1]].b)) AS y;

This problem has been fixed. The sp_parse_json function now accepts a wider variety of predefined output row/array variables. 
================(Build #2751 - Engineering Case #816996)================ When the server was running on Windows Server 2016, the operating system was reported as Windows Server 2012 R2. This has been fixed. ================(Build #2750 - Engineering Case #817080)================ Under very rare conditions, a server with plan caching enabled could crash during shutdown with assertion 101426. This has been fixed. ================(Build #2750 - Engineering Case #817036)================ In very rare circumstances, the server may return assertion error 201503 when running a delete on a table with indexes. This has been fixed. ================(Build #2748 - Engineering Case #816858)================ In very rare circumstances, the server could crash if a function or procedure created with EXTERNAL NAME 'native-call' returned the special FLOAT or DOUBLE values NAN, INF, or INFINITY and the value was used in an SQL expression. The problem does not happen if the function or procedure is created as an external procedure with EXTERNAL NAME '<call-specification>' LANGUAGE <language-type>. Also, the server incorrectly cleared the SQL error if a function or procedure output parameter value could not be assigned due to a conversion or truncation error. These problems have been fixed. ================(Build #2748 - Engineering Case #774884)================ In very rare circumstances, the server could crash when running a query with EXCEPT ALL on large data sets. This has been fixed. ================(Build #2742 - Engineering Case #816843)================ If "validate ldap server" failed because of a search failure, approximately 1 KB of memory was leaked. This has been fixed. ================(Build #2738 - Engineering Case #816707)================ After upgrading a database with auditing turned on, database tools like dblog and dbtran may show the error message "Log operation at offset <offset1> has bad data at offset <offset2>" for the renamed transaction log file. This has been fixed. 
================(Build #2737 - Engineering Case #816285)================ The database cleaner used to always do a commit when it completed, whether or not one was required. These commits are now skipped when they are unnecessary. ================(Build #2732 - Engineering Case #816510)================ If overlapping SYNCHRONIZE commands had been executed on the same database server, it was possible that the dbmlsync process spawned by the database engine would have failed to connect back to the database. A call to sp_get_last_synchronize_result() to view the results of the failed synchronization would have shown that an invalid userid or password had been used. This problem has now been fixed. ================(Build #2728 - Engineering Case #816502)================ The version of OpenSSL used by SQL Anywhere has been upgraded to 1.0.2p. ================(Build #2725 - Engineering Case #816182)================ The server may return the assertion errors 201503, 201501, 200608 or others if a REFRESH MATERIALIZED VIEW statement is cancelled or otherwise fails and the server rolls its operations back. The problem only happens if the statement contains the ISOLATION LEVEL clause with a level other than SHARE MODE, EXCLUSIVE MODE or SNAPSHOT. Immediate materialized views are not affected. This has been fixed. ================(Build #2724 - Engineering Case #816215)================ In rare circumstances, the server could crash when using an invalid index hint. This has been fixed. ================(Build #2724 - Engineering Case #816120)================ In very rare circumstances, the server could become unresponsive while merging the hash tables of a parallel hash join. This may happen if the hash table merge takes a long time and another connection runs a DDL statement or a checkpoint. This has been fixed. To work around the problem, set Max_query_tasks to 1. 
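Several of the cases above mention setting Max_query_tasks to 1 as a workaround. A minimal sketch of applying that option for the current connection only, using standard SQL Anywhere option syntax (the option name comes from the cases above):

```sql
-- Disable intra-query parallelism for this connection only;
-- the TEMPORARY setting reverts when the connection ends.
SET TEMPORARY OPTION Max_query_tasks = 1;
```

Omitting TEMPORARY (or using SET OPTION PUBLIC) would instead change the setting persistently for the user or for all users.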
================(Build #2724 - Engineering Case #814460)================ In very rare circumstances, the server could return the SQL error "Assertion failed: 106105 Unexpected expression type dfe_Quantifier while compiling" for a query with subselects. This has been fixed. ================(Build #2724 - Engineering Case #800146)================ In very rare circumstances, if a view that cannot be flattened (e.g. a grouped view) is used in a statement and the view's select list is simplified during query rewrite optimization, the SQLA Optimizer may generate an invalid query plan, and execution of this plan could cause a server crash. This has been fixed. ================(Build #2722 - Engineering Case #817361)================ In some cases, the server could choose a less-optimal plan for a query with a join predicate. This has been fixed. ================(Build #2722 - Engineering Case #816228)================ The fix for QTS 811513 introduced a regression in how a histogram over a string column is used to estimate the number of distinct values. This regression could result in a significant under-estimate in certain cases, which could negatively affect the optimization of queries containing grouping and/or join predicates over string columns. This has been fixed. ================(Build #2722 - Engineering Case #814964)================ In some circumstances, the SQL Anywhere query optimizer could over-estimate the size of a FK-PK join. While the impact of the over-estimate may have been slight for the particular join affected, the error could multiply through the rest of the join strategy and result in a sub-optimal query plan. This has been fixed. ================(Build #2709 - Engineering Case #816229)================ Some of the server's internal database-user operations did not perform optimally, which could manifest as slow execution of certain operations on a server with a large number of database users and a high volume of user activity. 
This has been fixed. ================(Build #2703 - Engineering Case #815957)================ If the system is running very low on memory, the database server could crash when TLS connections are received. This has been fixed. ================(Build #2703 - Engineering Case #815358)================ The system functions USER_NAME and SUSER_NAME returned the SQL error "Value <value> out of range for destination" if the argument did not fit into a signed integer. This has been fixed. The functions USER_NAME and SUSER_NAME now take an UNSIGNED INT parameter, and USER_ID() and SUSER_ID() return an UNSIGNED INT value. ================(Build #2703 - Engineering Case #815321)================ Under very rare circumstances, the server could crash when closing a cursor on a select that uses a parallelized index-only scan. This has been fixed. To work around the problem, set Max_query_tasks to 1. ================(Build #2703 - Engineering Case #809067)================ In very rare circumstances, the server may return assertion error 104904 while, or shortly after, running the procedure sa_index_density. This has been fixed. ================(Build #2699 - Engineering Case #815108)================ If a CREATE CERTIFICATE <cert-name> FROM <variable> statement was executed and the string stored in the variable was longer than fits on a database page, the server wrote the variable name instead of the variable value into the transaction log. When the transaction log was applied, the variable did not exist and assertion 100948 was raised. This has been fixed. ================(Build #2699 - Engineering Case #811326)================ Running a TRUNCATE TABLE on a global temporary 'share by all' table might cause a server crash. The crash can manifest as different server assertions. 
Examples of possible assertions:

Assertion: 201501 (Page 0xf:0x… for requested record not a table page)
Assertion: 201135 (page freed twice)
Assertion: 201503 (Record 0x.. not present on page 0xf:0x… )

The key in these assertions is that the page id starts with 0xf. This indicates a temp file page, which is where global temporary tables reside. The table would have been created as follows: CREATE GLOBAL TEMPORARY TABLE <table_name> (...) ... SHARE BY ALL. A workaround for this bug is to use a DELETE FROM <table_name> statement followed by a COMMIT. This has been fixed. ================(Build #2693 - Engineering Case #814791)================ In some circumstances, the server could return an assertion error 200610 when executing an ALTER TABLE that changes the data type of a column that is part of a text index. This has been fixed. ================(Build #2692 - Engineering Case #814984)================ If the server applied changes from a transaction log and the transaction log contained a CREATE CERTIFICATE statement with a FROM FILE clause, assertion error 100948 was returned if the certificate name needed to be delimited. This has been fixed. ================(Build #2687 - Engineering Case #814840)================ Attempting to connect to an LDAP server for server discovery may fail if the LDAP server is configured to require LDAP protocol version 3. LDAPUA is not affected. This has been fixed. ================(Build #2683 - Engineering Case #808572)================ In rare circumstances, the server could crash when executing a recursive query. This has been fixed. ================(Build #2682 - Engineering Case #814431)================ In very rare circumstances, the server could crash or return assertion error 109523 when executing a stored procedure that contains a SELECT statement with the ROLLUP, CUBE or GROUPING SETS feature. This has been fixed. 
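The ROLLUP/CUBE/GROUPING SETS case above concerns those constructs appearing inside a stored procedure body. A minimal sketch of the affected shape (the procedure, table, and column names are hypothetical):

```sql
-- Hypothetical procedure whose body contains a ROLLUP query;
-- executing such a procedure could hit the crash described above.
CREATE OR REPLACE PROCEDURE SalesSummary()
BEGIN
    SELECT region, product, SUM( amount ) AS total
    FROM Sales
    GROUP BY ROLLUP ( region, product );
END;
```

The same shape applies with GROUP BY CUBE ( ... ) or GROUP BY GROUPING SETS ( ... ) in place of ROLLUP.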
================(Build #2671 - Engineering Case #814523)================ The version of OpenSSL used by all SQL Anywhere and IQ products has been upgraded to 1.0.2o. ================(Build #2668 - Engineering Case #814259)================ In very rare circumstances, a query may fail with assertion error 106105 "Unexpected expression type dfe_PlaceHolder while compiling". A workaround is to disable intra-query parallelism for the affected queries (i.e. set option MAX_QUERY_TASKS=1 for the affected query/connection). This has been fixed. ================(Build #2651 - Engineering Case #813678)================ Previously, a Microsoft SQL Server table that has a DATETIMEOFFSET column could not be migrated to a SQL Anywhere database. This problem has been fixed. Support has been added for the Microsoft SQL Server DATETIMEOFFSET data type. This data type is represented as TIMESTAMP WITH TIME ZONE in SQL Anywhere/SAP IQ databases. There are several ways to migrate foreign tables to a database. The SQL Central "Migrate Database Wizard" is one of these. ================(Build #2645 - Engineering Case #813342)================ In some rare circumstances, the server could become unresponsive while running an sa_locks procedure call. This has been fixed. ================(Build #2636 - Engineering Case #812925)================ The error message for assertion error 101413 has been improved to provide more information. The new message format is "Unable to allocate a multi-page block of size %lu bytes". ================(Build #2632 - Engineering Case #812883)================ Under some conditions, when an ALTER TABLE statement with multiple clauses, including at least two ADD ... WITH DEFAULT clauses and a non-ADD clause, was executed on a non-empty table, the server could fail with "Internal database error *** ERROR *** Assertion failed: 200610 (16.0.0.2222) Attempting to normalize a non-continued row". After the server crash, the database can start up normally. 
The issue can be worked around by splitting the ADD and non-ADD clauses of the ALTER TABLE statement into separate statements. This has been fixed. ================(Build #2632 - Engineering Case #812813)================ In rare cases, queries that use a parallel index scan could crash with an assertion indicating a bad page lookup. The problem can be worked around by turning off query parallelism. This has been fixed. ================(Build #2618 - Engineering Case #812385)================ When using jConnect with a SQL Anywhere or SAP IQ database server, an attempt to update a column defined as BIGINT using an updateable ResultSet object may fail with the error message "Not enough values for host variables". This problem was introduced in 17.0.6.2783 as part of an update to the TDS protocol support. A temporary workaround may be to use an UNSIGNED BIGINT or INTEGER instead. This problem has been fixed. ================(Build #2606 - Engineering Case #811902)================ The bypass builder failed to add IS NOT NULL prefilter predicates on comparisons between a nullable column and a nullable expression that was not known at open time. If the bypass built an index plan, the sargable predicate on the column matched NULL==NULL when the expression evaluated to NULL. This has been fixed. ================(Build #2605 - Engineering Case #812030)================ The SQL function NEWID() may return duplicate values if it is executed below an Exchange query plan node of a parallel query execution. This has been fixed. ================(Build #2604 - Engineering Case #813034)================ When performing a point-in-time recovery to a provided timestamp, the server may erroneously report that a recovery was being attempted to a point in time earlier than when the original backup was taken. The server was comparing the given timestamp, as provided in UTC or converted to UTC, against the backup's checkpoint timestamp, which was in local time. 
This issue would affect users who are in a time zone ahead of UTC. This issue has been fixed. ================(Build #2602 - Engineering Case #811905)================ The first time a call is made into the Java external environment, an automatic commit can occur. The following is an example:

CREATE TABLE test (string char(30));
INSERT INTO test VALUES('one');
SELECT JavaFunc();
ROLLBACK;
SELECT * FROM test;

In this example, the ROLLBACK has no effect because of a COMMIT occurring during execution of JavaFunc. This problem has been fixed. ================(Build #2602 - Engineering Case #811903)================ When DATEADD is used to add or subtract months or quarters to a date/time value and the result should be '0001-01-01 00:00:00.000', an "out of range" error results. The following is an example:

SELECT DATEADD( mm, 0, '0001-01-01 00:00:00.000' );

This problem has been fixed. ================(Build #2601 - Engineering Case #811890)================ In rare cases, query plans using index scans could be slow or, in very rare cases, cause a crash with assertion number 200130. This has been fixed. ================(Build #2597 - Engineering Case #810834)================ Poor performance may have occurred on queries that involve index scans. The performance hit was more visible when there were many concurrent connections accessing keys that are close to each other in the index. Other observed behaviors included server lockups and hangs. This has been fixed. ================(Build #2594 - Engineering Case #808352)================ Under rare circumstances, updates on materialized views may cause database assertion 200602. This has been fixed. A workaround is to disable plan caching. ================(Build #2593 - Engineering Case #811731)================ Attempts to grant the SYS role to a user or another role would fail with a permission-denied error if the database was previously upgraded from version 12 or below. This has now been fixed. 
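The SYS role grant that failed in the last case above has this shape (the grantee name is hypothetical):

```sql
-- On databases upgraded from version 12 or earlier, this previously
-- failed with a permission-denied error.
GRANT ROLE SYS TO app_admin;
```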
================(Build #2583 - Engineering Case #811517)================ If no path is specified in the EXTERNAL NAME clause of a CREATE FUNCTION or CREATE PROCEDURE statement for LANGUAGE CLR, the error "Object reference not set to an instance of an object" is issued when the SQL procedure or function is called. The following is an example of a SQL function that interfaces to a CLR function:

CREATE OR REPLACE FUNCTION clrTable( IN tid INT )
RETURNS BIT
EXTERNAL NAME 'TableIDclr.dll::TableID.clrTable(int) bool'
LANGUAGE CLR

A workaround is to include a file path to the file (for example, .\TableIDclr.dll). This problem has been fixed. ================(Build #2583 - Engineering Case #811510)================ When executing queries with option ANSINULL=OFF, in very rare cases the optimizer could make a poor index choice for a simple single-table primary key lookup query. This has been fixed. When executing simple single-table queries, cost-based query optimization is bypassed in cases where the query parser classifies the query as simple enough to deterministically generate a plan without needing to estimate the selectivity of query predicates (e.g., a simple single-table lookup with a fully-specified primary key). However, when executing with option ANSINULL=OFF, all queries are fully optimized in order to handle the special semantics of NULL values dictated by ANSINULL=OFF. In the case of a query that had been classified as eligible for simple bypass, the resulting cost-based optimization did not take into account the runtime values of query parameters or host variables, resulting in occasional bad index selection due to poor selectivity estimation. The problem would be particularly pronounced for a table with a multi-column primary key where an index exists on a subset of key columns that have a highly skewed key distribution. 
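The simple-bypass discussion above can be illustrated with a sketch of the kind of statement affected (the table, columns, and key value are hypothetical):

```sql
-- With ansinull 'Off', even this fully-specified primary-key lookup,
-- normally eligible for optimizer bypass, went through full cost-based
-- optimization, where a poor index choice was occasionally possible.
SET TEMPORARY OPTION ansinull = 'Off';

SELECT order_id, status
FROM Orders
WHERE order_id = 42;
```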
================(Build #2575 - Engineering Case #811484)================ The ESQL and ODBC external environment support module, dbexternc17.exe, crashes with a heap corruption error when an input LONG VARCHAR argument is longer than 32752 bytes. The following is an example of a SQL procedure that acts as the interface to an external procedure written in C, and the CALL to that procedure that results in a crash:

CREATE OR REPLACE PROCEDURE Ctest ( IN inString LONG VARCHAR )
EXTERNAL NAME 'SimpleCProc@c:\\c\\extdemo.dll'
LANGUAGE C_ESQL64;

CALL Ctest( repeat( 'X', 32753 ) );

This problem also exists in SQL Anywhere version 16 software (dbexternc16.exe). This problem has been fixed. ================(Build #2575 - Engineering Case #811205)================ In some circumstances, the server could crash running an update on an outer join. This has been fixed. ================(Build #2565 - Engineering Case #810400)================ In some circumstances, the server could return the assertion errors 201501 or 201503 for a VALIDATE TABLE with snapshot. This has been fixed. ================(Build #2549 - Engineering Case #810648)================ The sa_get_table_definition built-in system procedure should return the SQL statements required to create the specified table and its indexes, foreign keys, triggers, and granted privileges. Previously, it did not include foreign key constraints. This problem has been fixed. This fix also reverts dbunload to its earlier behavior, where the unloading of a subset of tables (-t option) could include foreign key references to tables that are not included in the unload. ================(Build #2543 - Engineering Case #810477)================ In some circumstances, the server could return a misleading SQL error when trying to access a remote procedure for which the user has no permissions. This has been fixed. 
================(Build #2537 - Engineering Case #810165)================ If the option temp_space_limit_check has been set to 'On' and the option max_temp_space to a non-zero value, then the server may not have respected the quota or may have returned the non-fatal Assertion error 111111 "Sort error - could not add long hash row to run". This has been fixed. ================(Build #2524 - Engineering Case #809863)================ Under exceptionally rare circumstances, the server may have looped infinitely during an index backward scan. This has been fixed. ================(Build #2524 - Engineering Case #809817)================ In very rare circumstances, the server could crash during rewrite optimization when inferring predicates if an SQL error SQLSTATE_SYNTACTIC_LIMIT was set or the statement had been cancelled. This has been fixed. ================(Build #2520 - Engineering Case #809745)================ On Windows only, if the SQL Anywhere or SAP IQ database server is running locally (that is, not as a service) and the process owner/user logs off or restarts the computer, the database server is not shut down cleanly. When the database is restarted, the database server puts the database through a recovery process. This problem has existed since 16.0.0 GA and does not affect 12.0.1 or earlier versions. A work-around is to manually shut down the database server before logging off or shutting down the computer. This problem has been fixed. ================(Build #2513 - Engineering Case #809676)================ Under some circumstances, the server may have crashed when running the system procedure sa_get_histogram(). This has been fixed. ================(Build #2500 - Engineering Case #808800)================ In some circumstances, the server could crash when executing procedure xp_startsmtp. This has been fixed.
================(Build #2497 - Engineering Case #808726)================ If the statement CREATE STATISTICS had been executed for an external table, then the server correctly returned the SQL error code -660 but generated the unhelpful error message "Query Decomposition: Unknown Stmt Type". This has been fixed. ================(Build #2479 - Engineering Case #807703)================ If a table has a publication on it, all old column values are logged in the transaction log file when an update or delete row operation is executed. When running redo recovery or applying a transaction log file to a database using the -a option, the error "Invalid transaction log (id=<num>, page_no=<page>, offset=<offset>): identity value not found" could have been raised. This has been fixed. ================(Build #2472 - Engineering Case #806294)================ Under exceptionally rare circumstances, the server may have crashed during a close cursor if all of the following conditions were true: - The cursor's query used an index on a local temporary table. - The public option auto_commit_on_create_local_temp_index was set to Off (Default). - The option ansi_close_cursors_on_rollback was set to Off (Default). - A rollback occurred after opening the cursor and before closing it. This has been fixed. To work around the problem, one of the above options should be changed to On. ================(Build #2468 - Engineering Case #807349)================ A backwards index scan could skip rows. This has been fixed. ================(Build #2454 - Engineering Case #806478)================ The server crashed on startup if the environment variable SADIAGDIR specified a directory with a trailing slash or backslash. This has been fixed. ================(Build #2449 - Engineering Case #806167)================ Under very rare circumstances, the server may have hung when executing ALTER VIEW statements. This has been fixed.
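The option workaround for case 806294 can be sketched as follows; either option may be switched to On, and the statements below are illustrative.

```sql
-- Workaround sketch for case 806294: change one of the two options to On.
SET OPTION PUBLIC.auto_commit_on_create_local_temp_index = 'On';
-- or, for the current connection only:
SET TEMPORARY OPTION ansi_close_cursors_on_rollback = 'On';
```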
================(Build #2438 - Engineering Case #805783)================ Under exceptionally rare circumstances, the server could have hung when executing user event actions with complex action code. This has been fixed. ================(Build #2438 - Engineering Case #805695)================ The server could have failed to recover a CREATE INDEX statement that contained both WITH NULLS NOT DISTINCT and IN <dbspace>. This has been fixed. ================(Build #2432 - Engineering Case #805323)================ Under exceptionally rare circumstances, a query with very large nested expressions could not be canceled, and other database requests could have been blocked. This has been fixed. ================(Build #2429 - Engineering Case #805455)================ The runtime of the procedure sa_get_request_profile could have been very long on large request files. The performance of the procedure has been improved. For existing databases, a database upgrade must be run to get the new system procedures. ================(Build #2428 - Engineering Case #805460)================ An incorrect result could have been returned for queries containing a spatial predicate if the optimizer chose a plan that used a multicolumn index that included the geometry column. In certain cases, equality predicates on any columns that precede the geometry column in the index would not have been evaluated, causing too many rows to be returned. This has been fixed. As a workaround, customers can either drop the multicolumn index or else add a query hint to force selection of a different index. ================(Build #2426 - Engineering Case #805247)================ In certain circumstances, a LOAD TABLE could have caused the server to spin at 100% CPU indefinitely. This problem has been fixed.
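For the sa_get_request_profile improvement in case 805455, the database upgrade mentioned above can be performed with the ALTER DATABASE statement; a sketch, to be run while connected to the database with administrator privileges:

```sql
-- Upgrades the system objects, including system procedures, to the
-- installed server version so that the improved sa_get_request_profile
-- definition becomes available.
ALTER DATABASE UPGRADE;
```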
================(Build #2417 - Engineering Case #804480)================ If an integrated login with a login name containing a backslash character had been added to a database, then the schema file reload.sql created from that database would have contained an invalid GRANT INTEGRATED LOGIN statement. This has been fixed. ================(Build #2394 - Engineering Case #803976)================ Under rare circumstances, the server could have crashed or returned assertion error 109523 when sending an SMTP email using xp_sendmail. This has been fixed. ================(Build #2388 - Engineering Case #803188)================ If external environment calls had been made using different external environments, then an error could have occurred. For example, a mix of calls to methods in JAVA and C_ESQL32 external environments, or a mix of calls to methods in C_ODBC64, PHP, and JAVASCRIPT external environments, and so on could have resulted in an error. One example of an error message was "The definition of temporary table 'ExtEnvMethodArgs' has changed since last used". However, other messages related to the temporary table may have appeared as well. The ExtEnvMethodArgs temporary table is used to communicate argument information between the database server and the external environment. This problem has been fixed. ================(Build #2366 - Engineering Case #803177)================ The server may have returned a procedure result set that did not have the correct schema. The problem happened if the procedure definition had no RESULT clause and was loaded while parsing another batch, function, or procedure. To work around the problem, the connection option "DescribeCursor=Always" can be used. The problem could also be solved by recompiling the procedure using the statement ALTER PROCEDURE <proc-name> RECOMPILE. This has been fixed.
================(Build #2363 - Engineering Case #803114)================ Under rare circumstances, the server could have crashed when executing a query with a window function. This has been fixed. ================(Build #2361 - Engineering Case #802767)================ For non-TDS clients, parameters can be used in a batch if the parameters are confined to a single statement. However, if the following batch had been prepared and executed, a "Communication error" occurred. BEGIN DECLARE arg1, arg2 VARCHAR(255); SELECT ?,? INTO arg1, arg2; SELECT arg1, arg2; END This problem has been fixed. If the argument values are "Hello" and "there", the result set contains two columns with the values "Hello" and "there". ================(Build #2353 - Engineering Case #802806)================ The server would have incorrectly returned the error SQLE_OMNI_REMOTE_ERROR if the REGEXP search condition was used for proxy tables. This has been fixed. ================(Build #2348 - Engineering Case #802689)================ Under very rare circumstances, the server may have failed with assertion 101417 - "Cross database page access", assertion 200130 - "Invalid page found in index", or others, during database recovery or while applying changes as a mirror server. The problem only occurred with DDL operations that used parallel query execution. This has been fixed. The problem can be avoided by disabling parallel query execution for group PUBLIC (set option PUBLIC.max_query_tasks=1). ================(Build #2345 - Engineering Case #802688)================ The server may have crashed when calling the sa_split_list procedure. This has been fixed. ================(Build #2345 - Engineering Case #802672)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.2j. ================(Build #2343 - Engineering Case #802533)================ When calling a web service using the SoapUI tool, a "400 Bad Request" error is returned.
This is caused by the presence of a CDATA section in a parameter value (<![CDATA[ xml-string ]]>). CDATA can be used to embed an XML string into an XML structure so that it is not parsed as part of the overall XML structure. For example, suppose the following SOAP request was sent to the database server. <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:fix="http://url.sample.com"> <soapenv:Header/> <soapenv:Body> <fix:authenticate> <fix:ac_XML><![CDATA[<auth><uid>DBA</uid><pwd>sql</pwd></auth>]]></fix:ac_XML> </fix:authenticate> </soapenv:Body> </soapenv:Envelope> When the database server SOAP parser encounters the CDATA section, it returns an error. This problem has been fixed. The server now treats the CDATA string as plain text. ================(Build #2342 - Engineering Case #802464)================ When trying to call a remote function in a SQL Anywhere database that returns a VARCHAR result, the error "Count field incorrect" is returned. For example, suppose the remote SQL Anywhere database server defines the following function. CREATE OR REPLACE FUNCTION DBA.TestFunction( IN arg1 CHAR(255) ) RETURNS VARCHAR(32767) BEGIN RETURN 'Good day'; END; The local database defines a remote server and a cover function to call the remote function, and then calls the remote function as follows: CREATE SERVER rmt CLASS 'SAODBC' USING 'DRIVER=SQL Anywhere 17;DSN=Test17;Server=demo17;UID=DBA;PWD=sql'; CREATE OR REPLACE FUNCTION TestFunction( IN arg1 CHAR(255) ) RETURNS VARCHAR(32767) AT 'rmt..DBA.TestFunction'; SELECT TestFunction( 'Hello' ); An error was returned when the SELECT statement was executed. This problem has been fixed. In the example above, the SELECT statement now returns the expected VARCHAR result. ================(Build #2337 - Engineering Case #801919)================ If communication compression was used with packet sizes larger than around 32K, the client or server could have crashed. This has now been fixed.
================(Build #2336 - Engineering Case #798705)================ Under exceptionally rare circumstances, the server may have returned an incorrect result set if all the following conditions were true: - the statement contained user defined functions or stored procedures - the statement was part of a function, procedure, event, or batch - parallel query execution was performed - the parallel subtree of the query plan referenced local variables or function/procedure arguments This has been fixed. A workaround for the problem is to set the option Max_query_tasks = 1. ================(Build #2332 - Engineering Case #801492)================ Under very rare circumstances, the server may have returned an incorrect result set if local SQL variables were used with parallel query execution. This has been fixed. To work around the problem, set the option Max_query_tasks = 1. ================(Build #2330 - Engineering Case #801308)================ Under very rare circumstances, the server may have crashed if miscellaneous SQL functions were used in a parallel query execution. This has been fixed. To work around the problem, set the option Max_query_tasks = 1. ================(Build #2327 - Engineering Case #801195)================ A call to xp_startsmtp or xp_sendmail could have caused the server to hang indefinitely. If the server disconnected from the SMTP server and then reconnected, and the SMTP server stopped responding at the wrong time, a hang could have resulted. This has been fixed. ================(Build #2326 - Engineering Case #801152)================ The server could have crashed in the spatial library in certain out-of-memory conditions. This has been fixed. ================(Build #2322 - Engineering Case #801026)================ Additional changes were made to the fixes for Engineering case 800808 to ensure the server shuts down cleanly after a failed xp_sendmail() call has occurred.
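The parallel-execution workaround cited in cases 798705, 801492, and 801308 is the same option in each case; a sketch:

```sql
-- Disables intra-query parallelism for all connections, avoiding the
-- parallel-plan issues described above until the fixed build is applied.
SET OPTION PUBLIC.max_query_tasks = 1;
```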
================(Build #2320 - Engineering Case #807981)================ If the DBA user is modified to no longer have the SYS_AUTH_RESOURCE_ROLE granted and the database is subsequently unloaded and reloaded, then the DBA user in the reloaded database will incorrectly have SYS_AUTH_RESOURCE_ROLE re-granted. This has been fixed. ================(Build #2320 - Engineering Case #800808)================ The server could have crashed if an application used the system procedure xp_sendmail with a message body that was greater than 256 bytes in length. This problem was introduced by the fix for Engineering case 793866. The crash has now been fixed. ================(Build #2318 - Engineering Case #800705)================ When the source or destination path argument for a file or directory functions like sp_move_directory, sp_copy_directory, or sp_move_file, contained a symbolic link (SYMLINKD), the function may have failed. Consider the following examples where “sqlany” and “sqlany17” are symbolic links for c:\sa17 and c:\sa17.1 respectively (both directories exist): SELECT sp_copy_directory('c:\\sqlany', 'c:\\temp\\sa17'); The above statement would have returned the error “c:\sqlany is not a directory”. SELECT sp_copy_directory('c:\\temp\\sa17', 'c:\\sqlany17'); The above statement would have returned the error “Unable to create directory c:\sqlany17”. If a junction was used instead, there were no errors. This problem has been fixed. ================(Build #2315 - Engineering Case #800491)================ Under rare circumstances, the server would have returned the SQL error "Correlation name not found" for complex queries if they contained derived table blocks and proxy tables. This has been fixed. ================(Build #2314 - Engineering Case #800426)================ Under some circumstances, the LIST function may have caused the server's temp file to grow to a large size. This has been fixed. 
================(Build #2309 - Engineering Case #800115)================ If a user had been granted the SYS_AUTH_SA_ROLE and/or the SYS_AUTH_SSO_ROLE, those role grants would have been lost if the database was unloaded and then reloaded. This problem has now been fixed. ================(Build #2304 - Engineering Case #799118)================ The server may have incorrectly returned the error SQLSTATE_BAD_RECURSIVE_COLUMN_CONVERSION if a recursive select statement used numeric expressions that did not have the current default precision and scale. This has been fixed. ================(Build #2303 - Engineering Case #799484)================ The server incorrectly evaluated predicates of the form NULLIF( expr_1, expr_2) IS NOT NULL to false if all of the following conditions were true: - expr_1 was a not-nullable expression (e.g. a not null column) - expr_2 evaluated to NULL - expr_2 was known at open time of the query (e.g. a variable, host variable or the constant NULL). This has been fixed. ================(Build #2303 - Engineering Case #799117)================ Under very rare circumstances, the server may have crashed when executing a recursive query. This has been fixed. ================(Build #2301 - Engineering Case #800023)================ SQL Anywhere 16.0 running on HP-UX 11i v2 (11.23) now requires the HP-UX O/S patch PHSS_35055. ================(Build #2300 - Engineering Case #799495)================ Under rare circumstances, expensive statement logging, or statement performance in SQL Anywhere 17, could have caused a client connection to incorrectly return an error. This has been fixed. ================(Build #2299 - Engineering Case #799462)================ Previously, the SQL Anywhere database server integrated login support searched for a user in the Global Groups on the domain controller (identified by the integrated_server_name server option) and Local Groups on the database server computer. Now, it also searches Local Groups on the domain controller.
For clarification, Windows will only return the names of global groups in which the user is a direct member, or the names of local groups containing global groups in which the user is a direct member. If user userA is listed in global group groupB which is, in turn, listed in global group groupC, then only groupB is returned. Global group groupC is not returned even though it contains global group groupB. If a local group localD contains groupB, then userA is located by indirection in localD. If a local group localD contains groupC, then userA is not located by indirection in localD. ================(Build #2298 - Engineering Case #793866)================ A call to the system procedures xp_startsmtp or xp_sendmail could have caused the database server to hang indefinitely if the SMTP server was not well-behaved. This has been fixed. ================(Build #2297 - Engineering Case #797289)================ If the list of CC or BCC recipients supplied to xp_sendmail was created in SQL using string functions or concatenation, it was possible for the recipient lists to be ignored. This has been fixed. ================(Build #2291 - Engineering Case #798913)================ If a batched insert failed due to a 'duplicate primary key', 'column cannot be NULL' or some other error, then the ODBC driver would have incorrectly stopped processing the batch and returned the error to the application. This problem has now been fixed and the ODBC driver will now attempt to process all of the rows in the batched insert. The driver will return SQL_SUCCESS if all rows were inserted successfully and SQL_ERROR if one or more of the rows were not inserted. ================(Build #2291 - Engineering Case #798912)================ Under very rare circumstances, the server may have crashed with a "cache page allocation" fatal error. This has now been corrected.
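The NULLIF predicate shape described in case 799484 can be illustrated with a minimal sketch; the table and variable names are hypothetical.

```sql
-- Hypothetical not-nullable column and a connection-scope variable
-- holding NULL, matching the conditions listed for case 799484.
CREATE TABLE T ( c INT NOT NULL );
CREATE VARIABLE v INT;
SET v = NULL;

-- Since c is declared NOT NULL, NULLIF( c, v ) returns c whenever v is
-- NULL, so this predicate should always be true; before the fix it was
-- incorrectly evaluated as false.
SELECT * FROM T WHERE NULLIF( c, v ) IS NOT NULL;
```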
================(Build #2290 - Engineering Case #798668)================ In some cases, creating a circular string could have resulted in the server entering an endless loop. This has been corrected. ================(Build #2279 - Engineering Case #798158)================ Calling the system procedure sa_refresh_text_index(), in the presence of text indexes with names that could only be used as quoted identifiers, could have caused an error to be returned. This has been fixed. Note, this issue could be observed during dbunload -g if text index names contained multibyte characters. A workaround is to manually refresh the text indexes. ================(Build #2276 - Engineering Case #797911)================ The server would have given a 'Table not found' error if an application attempted to create a HANA proxy table and the actual HANA table had a mixed case owner, schema or table name. This problem has now been fixed. Note that with this change the application must now ensure that the proper case is used when specifying owner, schema and table name in the AT clause of the CREATE EXISTING TABLE statement. ================(Build #2276 - Engineering Case #797907)================ In rare cases, attempting to call the system procedure sp_objectpermission() could have led to a server hang. This problem has been fixed. ================(Build #2276 - Engineering Case #797902)================ When processing a statement that returned many rows with a very low per-row cost, it was possible for the total time to be higher than it should have been. This has been fixed. Measured slowdown was about 250 nanoseconds per row returned to the client. ================(Build #2275 - Engineering Case #797805)================ The server could have deadlocked or hung if a dbspace was being extended at the same time as a user-defined event was being loaded or reloaded. This problem has been fixed.
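The manual-refresh workaround mentioned in case 798158 uses the REFRESH TEXT INDEX statement; a sketch with hypothetical index and table names:

```sql
-- Refreshes one text index directly, bypassing sa_refresh_text_index().
-- Quoted identifiers allow index names that are not valid plain
-- identifiers, such as names containing spaces or multibyte characters.
REFRESH TEXT INDEX "my text index" ON DBA.Documents;
```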
================(Build #2274 - Engineering Case #797752)================ Inserting a round-Earth geometry could have failed with “Error parsing geometry internal serialization” (SQLCODE -1415). This has been fixed. ================(Build #2270 - Engineering Case #797560)================ If a computer running the database server had at least 128 CPUs, connections may have reported incorrect statistics. This has been fixed. ================(Build #2269 - Engineering Case #797365)================ When attempting to use the Upgrade Database wizard to change a database's security model from definer to invoker, the security model would have remained unchanged. This has been fixed. ================(Build #2268 - Engineering Case #797401)================ Under rare circumstances, the database server could have crashed while updating the column statistics at the end of a DML statement. This has been fixed. ================(Build #2268 - Engineering Case #782470)================ Under very rare circumstances, it may have taken a long time to cancel a complex query during optimization. This has been fixed. ================(Build #2267 - Engineering Case #797233)================ A query containing a GROUPING function in the HAVING clause, that did not appear elsewhere in the query, could have incorrectly returned a syntax error. This has been fixed. Note, a workaround is to include the expression containing the GROUPING function in the select list. ================(Build #2265 - Engineering Case #797145)================ Under very rare circumstances, the server would have crashed if the GROUP BY clause of a query contained outer references. This has been fixed. ================(Build #2262 - Engineering Case #796806)================ If a query contained a ROLLUP, CUBE, or GROUPING SETs that included a constant value, calling the GROUPING() function with that constant could incorrectly have given a syntax error. This has been fixed.
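A minimal sketch of the GROUPING-in-HAVING shape from case 797233; the table and column names are hypothetical.

```sql
-- Before the fix, referencing GROUPING( city ) only in the HAVING
-- clause could raise a spurious syntax error; the workaround was to
-- also include the same GROUPING expression in the select list.
SELECT city, SUM( amount )
FROM Sales
GROUP BY ROLLUP ( city )
HAVING GROUPING( city ) = 0;
```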
================(Build #2262 - Engineering Case #796705)================ An authenticated server may have given authentication errors to connections, even though the authentication string was a valid string provided by SAP. This has been fixed. ================(Build #2260 - Engineering Case #796738)================ The server may have returned an incorrect result set if a query had an inner query block with a GROUP BY CUBE or ROLLUP and an outer query block had predicates in the WHERE clause. This has been fixed. ================(Build #2259 - Engineering Case #796579)================ It was possible for the server to crash when sp_parse_json was executed using input that contained mismatched data types, where one type was a null and the other type was an object or an array. For example, the following would have crashed the server: [ {a: null}, {a: {b:1} } ]. This has now been fixed. A workaround is to ensure that all objects within an array have exactly the same data type. In the previous example, it could be fixed by changing the input to: [ {a: {b:null} }, {a: {b:1} } ]. ================(Build #2255 - Engineering Case #796262)================ On Unix systems, if a server was started with the -ud option, and that server attempted to start a database file that was already running on another server (with a different name), the new server may have crashed on shutdown. The reported error message also did not correctly indicate that the database file was in use. This has been fixed. ================(Build #2254 - Engineering Case #796139)================ Under very rare circumstances, the SQL Anywhere server could have crashed when executing a complex query with a large number of threads executing in parallel. This problem has now been fixed.
================(Build #2253 - Engineering Case #796081)================ On systems running Microsoft Windows, the server may have crashed on startup when attempting to obtain disk drive parameters if the disk driver did not implement the IOCTL_STORAGE_QUERY_PROPERTY control code correctly. When successful, the information returned by this system call can be seen using the following SQL query. SELECT DB_EXTENDED_PROPERTY( 'DriveModel' ); This problem has been fixed. If the disk drive parameters cannot be determined, the drive model will now be “Unknown”. ================(Build #2251 - Engineering Case #795922)================ If a web procedure URI began with “https_fips://” indicating that HTTPS should be used with the FIPS-certified libraries, calling the procedure would result in SQLCODE -980, “The URI ‘<uri>’ is invalid”. This has been fixed. ================(Build #2251 - Engineering Case #795917)================ Certain assertion numbers could have been raised in more than one situation. This has been fixed so that assertion numbers are now unique. ================(Build #2250 - Engineering Case #795751)================ In very rare cases, cancelling a statement that processed an XML document could have taken a long time. This has been fixed. ================(Build #2249 - Engineering Case #795599)================ Under very rare circumstances, the server may have crashed during a database cleaner run if there had been tables dropped and views created shortly before. This has been fixed. ================(Build #2248 - Engineering Case #795609)================ The SQL functions NUMBER(*) and RAND() may have returned duplicate values if they were executed below an Exchange query plan node of a parallel query execution. This has been fixed. ================(Build #2246 - Engineering Case #795546)================ If a server was using the -zoc switch to log web procedure calls, and a web procedure that used chunked encoding was called, the server could have crashed.
This has been fixed. ================(Build #2246 - Engineering Case #794511)================ Under very rare circumstances, the server may have crashed while receiving host variables from a TDS based connection if the receiving TDS token stream violated the TDS protocol definition. This has been fixed. The server will now return a SQLSTATE_COMMUNICATIONS_ERROR error in this situation. ================(Build #2242 - Engineering Case #795335)================ In very rare cases, the server could have crashed while closing a connection that made external environment calls to a connection scoped external environment. The problem would show up if the external environment had open cursors at the time the connection was closed. The problem has now been fixed. ================(Build #2241 - Engineering Case #795198)================ When using a JSON data structure containing empty arrays (represented in a string as '[]') as input to the sp_parse_json procedure, it was possible for the server to crash. This has been fixed. ================(Build #2237 - Engineering Case #794878)================ The dbmanageetd tool can be used to read and write .etd files. When used to write files in ETD format, some trace event records were written improperly, generating files which could not be read. This has been fixed. ================(Build #2236 - Engineering Case #794593)================ Incorrect results could have been returned if a SQL SECURITY INVOKER user defined function was invoked multiple times in a single statement, with at least two calls being made by different users. For example, the issue would have occurred if the same UDF was invoked from a view referenced in a query, and in the SELECT list of the query directly. This has been fixed.
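For context on case 794593, a SQL SECURITY INVOKER function runs with the privileges of the calling user rather than its owner; a minimal sketch with hypothetical names:

```sql
-- Each invocation should be evaluated for the user actually calling it;
-- before the fix, multiple calls by different users within a single
-- statement could share one evaluation and return incorrect results.
CREATE OR REPLACE FUNCTION whoami()
RETURNS VARCHAR(128)
SQL SECURITY INVOKER
BEGIN
    RETURN CURRENT USER;
END;
```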
================(Build #2233 - Engineering Case #794531)================ When creating a foreign key with an ON DELETE SET DEFAULT or ON UPDATE SET DEFAULT action on a column with no default value, the error message returned by the server would have failed to reference the table name: “Constraint '<column>' violated: Invalid value for column '<table>' in table '???'”. This has been fixed so that the table name is now referenced. ================(Build #2231 - Engineering Case #794343)================ The server could have crashed executing a spatial query in a low memory situation. This has been fixed. ================(Build #2223 - Engineering Case #793824)================ Under very rare circumstances, the server may have crashed when using a RANK aggregate function. This has been fixed. ================(Build #2222 - Engineering Case #793674)================ When processing a statement that contained a subselect expression where the select-list item used a LIST or COUNT aggregate, it was possible for the statement to fail assertion 106901 - "Expression value unexpectedly NULL in write". This has now been fixed. ================(Build #2221 - Engineering Case #793370)================ The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43. ================(Build #2220 - Engineering Case #792816)================ The server may have failed the non-fatal assertion 102604 - "Error building sub-select" if a query contained a DISTINCT that could have been eliminated, the query cursor was not declared as read-only, and there was a publication with a subselect in the SUBSCRIBE BY clause. This has been fixed. ================(Build #2217 - Engineering Case #792898)================ The server may have crashed or failed assertion 109512 - "Freeing already-freed memory" during a DROP ROLE or DROP USER statement if there were multiple extended grants (e.g. SET USER and CHANGE PASSWORD). This has been fixed.
Note, a workaround is to revoke extended grants before dropping a Role or User. ================(Build #2217 - Engineering Case #792313)================ The server can perform a fast TRUNCATE TABLE if the table is referenced by foreign key tables and all the foreign key tables are empty. Under some circumstances a fast truncate was not being executed. This has been fixed. ================(Build #2214 - Engineering Case #792643)================ If a tenant database (X) running in a cloud had web services available, a cloud server that was NOT running X could have crashed if it received an HTTP request for X. This has been fixed. The request will now be redirected to the server running X. ================(Build #2213 - Engineering Case #792925)================ If an execution plan executed a subquery (a subselect expression, EXISTS, or ANY/ALL) many times, and the subquery was very cheap, then the overall execution time of the query was higher than it could have been. This has been fixed. ================(Build #2212 - Engineering Case #792498)================ Under very rare circumstances, the server may have failed assertion 104904: "latch count not 0 at end of request", or others, after executing a REORGANIZE TABLE statement with a PRIMARY KEY, FOREIGN KEY or INDEX clause, or after shrinking an index. This has now been fixed. ================(Build #2212 - Engineering Case #792266)================ The following form of the UPDATE statement [SQL Remote] is executed by the Message Agent of SQL Remote to determine existing and new recipients of the rows in a table: UPDATE table-name PUBLICATION publication-name { SUBSCRIBE BY subscription-expression | OLD SUBSCRIBE BY old-subscription-expression NEW SUBSCRIBE BY new-subscription-expression } WHERE search-condition where each subscription expression is either a value or a subquery. The statement does not modify any of the rows in the database, but puts records in the transaction log to indicate movement of rows from or to a recipient.
Since this type of UPDATE statement does not modify any rows, it should not execute any BEFORE or AFTER triggers. Before this change, it incorrectly called BEFORE UPDATE triggers on the target table, leading to wasted work in some cases. This has been fixed; BEFORE UPDATE triggers are no longer called for this type of statement. ================(Build #2209 - Engineering Case #792263)================ In some situations, creating an index on a very large table could have caused the server to appear to be hung. The condition went away once the index was created. This has now been fixed. ================(Build #2209 - Engineering Case #792227)================ Some valid round-earth geometries could have failed to input properly, either giving an error, failing an assertion, or causing a server crash. This has been fixed. ================(Build #2208 - Engineering Case #792221)================ If the server encounters a fatal database error, it writes a minidump file. During this process, the server may have overwritten this minidump file, or created another minidump file due to a crash while freeing static data. This has been fixed. ================(Build #2206 - Engineering Case #791615)================ Temporary file names for the server and various utilities were generated using a standard library function that may have produced somewhat predictable file names. These predictable temporary file names could have been exploited in various ways. Collisions between processes or threads were also possible and could have resulted in undesirable behaviour. This has been fixed. A workaround that mitigates most of the issues is to set SATMP to a location that is only writable by the engine and other trusted users. ================(Build #2205 - Engineering Case #792037)================ In very rare cases, the server could have crashed dereferencing a bad pointer, or connections could have failed to unblock. This has been fixed. 
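A hedged illustration of the publication UPDATE form described for Engineering case 792266 above; the table, publication, column, and subscription expressions are all hypothetical:

```sql
-- Hypothetical SQL Remote publication UPDATE: no rows are modified;
-- records are written to the transaction log so the Message Agent can
-- move rows between the old and new recipients.
UPDATE SalesOrder
PUBLICATION SalesPub
OLD SUBSCRIBE BY ( SELECT rep_id FROM Territory WHERE region = 'OldEast' )
NEW SUBSCRIBE BY ( SELECT rep_id FROM Territory WHERE region = 'NewEast' )
WHERE region_code = 'E';
```

With this fix, such a statement no longer fires BEFORE UPDATE triggers on the target table.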
================(Build #2204 - Engineering Case #791896)================ In very rare cases the server may have failed assertion 201501: “Page X:Y for requested record not a table page”. This has been fixed. ================(Build #2202 - Engineering Case #791754)================ In very rare timing-dependent cases, recording event tracing could have resulted in the server crashing. This has now been fixed. ================(Build #2201 - Engineering Case #791667)================ The function ST_PointOnSurface() requires an ST_Polygon or ST_MultiSurface as input. Similarly, the function ST_IsRing() requires an ST_LineString as input. Using these functions on valid geometry types may have resulted in an error indicating that the geometry type was incorrect. This has been fixed. ================(Build #2201 - Engineering Case #791665)================ When creating a LineString with a round-earth SRS, points that were 180 degrees of longitude apart were rejected as being nearly antipodal, even if they were physically close together. For example, the following geometry would have failed to load, even though it is a relatively short line: LineString (-180 -84, 0 -90). This has been fixed. ================(Build #2201 - Engineering Case #791283)================ Under rare circumstances, the server could have crashed when executing a statement involving a stored procedure or user-defined function defined with SQL SECURITY INVOKER. This has been fixed. ================(Build #2201 - Engineering Case #790722)================ Under very rare circumstances, the server may have crashed or failed an assertion "Assertion failed: 109512 Freeing already-freed memory". This has been fixed. To work around the problem, plan caching can be turned off (option Max_plans_cached = 0). ================(Build #2200 - Engineering Case #791554)================ Zero-length LineStrings were not handled properly by the set operations, ST_IsSimple, and ST_Buffer. 
Passing such a LineString to ST_Buffer may have caused the server to fail an assertion. This has been fixed. ST_IsSimple now returns TRUE if there are only two points in the LineString. LineStrings containing more than two points that are also zero-length are not considered to be ST_IsSimple. Set operations now treat a zero-length LineString as a single point. Zero-length segments within a given LineString whose overall length is non-zero are ignored. ================(Build #2199 - Engineering Case #791165)================ In some situations, when a table had hundreds of foreign key constraints defined, an insert into that table may have caused a server crash. The behavior has now been changed to throw an error instead. ================(Build #2196 - Engineering Case #788462)================ The server may have incorrectly returned the error "Function or column reference to 'rowid' must also appear in a GROUP BY" when a select with aggregation had a correlated subquery in its select list and the subquery contained an outer join that returned constants from the null-supplying side. For example:

select ( select sum(T2.b2)
         from T2 left outer join ( select 1 as x from T3 ) V3 on 1=1
         where T1.a1 = T2.a2 ) as Z,
       count(*)
from T1
group by T1.a1

This query has a main query block with GROUP BY T1.a1, a subquery with alias Z, and an outer reference in the subquery using T1.a1. The null-supplying side of the outer join V3 returns a constant "1 as x". This has been fixed. ================(Build #2193 - Engineering Case #790670)================ Accessing a proxy table mapped to a remote Oracle table which had special characters in its name (such as ‘/’, ‘$’, ...) reported syntax errors such as ORA-00903 and ORA-00933. The problem was due to the table identifiers not being delimited properly, which has now been fixed. 
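A hedged sketch of the scenario in Engineering case 790670; the server, owner, and table names are hypothetical:

```sql
-- Hypothetical proxy-table mapping to an Oracle table whose name contains
-- a special character; statements forwarded to Oracle now delimit the
-- identifier, so errors such as ORA-00903 / ORA-00933 are no longer raised.
CREATE EXISTING TABLE dba.orders_proxy
AT 'orasrv..SCOTT.ORDERS/V2';

SELECT COUNT(*) FROM dba.orders_proxy;
```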
================(Build #2191 - Engineering Case #790600)================ In some cases, UPDATE statements that included SET <variable> = <expression> could have failed to evaluate the expression for the variable, setting it to NULL instead. This has been fixed. A workaround is to issue a separate query before issuing the update. For example:

UPDATE T SET @var = T.x, T.y = 4 WHERE T.z = 1

becomes:

SELECT T.x INTO @var FROM T WHERE T.z = 1;
UPDATE T SET T.y = 4 WHERE T.z = 1;

================(Build #2191 - Engineering Case #790589)================ In very rare situations, a server could have crashed if an application that made a connection-scoped external environment call closed the connection while the server machine was under heavy load. This problem has now been fixed. ================(Build #2190 - Engineering Case #790543)================ Under rare circumstances, cursors with a cached query plan could have caused a memory leak if diagnostic tracing was used. This has been fixed. ================(Build #2184 - Engineering Case #789852)================ If a server was running on a Unix system with multiple network adapters and the MyIP parameter was used with a link-local IPv6 address (i.e. one that begins with “fe80::”), clients may not have been able to find the server using TCP/IP. This has been fixed. ================(Build #2183 - Engineering Case #789740)================ The server may have returned a sequence value for CURRVAL even if NEXTVAL was never called in the current connection for this sequence. This has been fixed. ================(Build #2183 - Engineering Case #786626)================ The server did not allow the use of sequence.currval as a default column value. This has now been implemented. ================(Build #2180 - Engineering Case #789267)================ The changes for Engineering case 786183 did not completely resolve a problem where domain users explicitly present in the local group were no longer being located. 
This has been corrected so that local users or domain users that are members of a local group, as well as domain users who are indirectly members of a local group (by virtue of being a member of a global group placed within a local group), are now found and the group name is checked for an integrated login mapping. ================(Build #2174 - Engineering Case #788586)================ Several of the secured feature system procedures, such as sp_create_secure_feature_key() and sp_alter_secure_feature_key(), have a parameter named auth_key. The documented name of the parameter for sp_use_secure_feature_key() is auth_key as well; however, the actual implementation used a different parameter name. This has been corrected. The parameter name is now consistent with the other secured feature system procedures and the documentation. ================(Build #2174 - Engineering Case #788560)================ When processing a statement that contained a subselect expression with the select-list item being either the LIST or COUNT aggregate, and a GROUP BY clause that contained only constant expressions or outer references to outer query expressions, it was possible for the statement to fail with the error: Assertion failed: 106901 "Expression value unexpectedly NULL in write". This has now been corrected. ================(Build #2174 - Engineering Case #786099)================ If an application created a temporary table, made several external environment calls that modified that temporary table, dropped the temporary table, and then created a new temporary table, the server could in rare cases have hung trying to create the new temporary table. This problem has now been fixed. ================(Build #2173 - Engineering Case #788457)================ Several SQL statements for creating objects accepted both the “OR REPLACE” and “IF NOT EXISTS” clauses at the same time. This has been fixed so that at most one of these two clauses can be used. 
The following SQL statements were affected:

CREATE GLOBAL TEMPORARY TABLE (v17)
CREATE MUTEX (v17)
CREATE SEMAPHORE (v17)
CREATE SPATIAL REFERENCE SYSTEM
CREATE SPATIAL UNIT OF MEASURE

================(Build #2173 - Engineering Case #788412)================ If an application made a SQL SECURITY DEFINER procedure call which changed the effective user to something other than the logged-in user, and the procedure subsequently made a remote data access request with that different effective user, and there was no externlogin for that effective user, then there were some instances where the remote connection succeeded without the required externlogin. This issue has now been fixed. ================(Build #2173 - Engineering Case #788402)================ When calling a secure web procedure, the database server would have leaked memory. This has now been fixed. ================(Build #2172 - Engineering Case #788401)================ Several SQL statements for creating or altering objects would have accepted some clauses more than once and silently ignored all but the last one. Others would give an unhelpful error message like “Syntax error near ‘(end of line)’ on line 1”. This has been fixed so that duplicate clauses are no longer permitted and will raise error code -1933 in the following statements:

CREATE/ALTER FUNCTION (web service)
CREATE/ALTER LDAP SERVER
CREATE/ALTER MIRROR SERVER
CREATE/ALTER ODATA PRODUCER
CREATE/ALTER PROCEDURE (web service)
CREATE/ALTER SERVICE
CREATE/ALTER SPATIAL REFERENCE SYSTEM
CREATE/ALTER SPATIAL UNIT OF MEASURE
CREATE/ALTER TIME ZONE
CREATE/ALTER USER

================(Build #2171 - Engineering Case #786492)================ If a multi-threaded application instantiated separate DbmlsyncClient objects on separate threads, it was possible for the application to have crashed if the Init function was called concurrently on multiple threads. 
The SYNCHRONIZE command in the SQL Anywhere database engine uses the Dbmlsync API, so concurrent calls to the SYNCHRONIZE command on different connections could also have resulted in a crash of the database server. These issues have now been fixed. ================(Build #2170 - Engineering Case #788247)================ When running a statement with very complex expressions (for example in the WHERE or SELECT clause), it was possible for the server to fail an assertion or crash when the statement was closed. The complexity of the expression needed was related to the maximum cache size. This has been fixed. ================(Build #2170 - Engineering Case #788197)================ When connecting to an authenticated server using SQL Anywhere tools such as Interactive SQL or SQL Central, executing statements that would modify the database would have failed with the error: "-98 Authentication violation". This problem was introduced by changes made for Engineering case 785757 and has now been fixed. ================(Build #2169 - Engineering Case #788051)================ If a server was running on a Unix machine (other than Mac OS X) with multiple network adapters and the MyIP parameter was used with a link-local IPv6 address (i.e. one that begins with “fe80::”), clients may not have been able to find the server using TCP/IP. This has been fixed. ================(Build #2168 - Engineering Case #788026)================ Under rare circumstances, the server may have crashed, or failed an assertion: “Assertion failed: 109512 Freeing already-freed memory”. This has now been fixed. ================(Build #2168 - Engineering Case #669578)================ When executing particular forms of complex queries with very large expressions, it was possible for the server to fail a fatal assertion. 
This has been fixed so that these statements now report one of the two following errors:

SYNTACTIC_LIMIT 54W01 -890 "Statement size or complexity exceeds server limits"
DYNAMIC_MEMORY_LIMIT 54W19 -1899 "Statement requires too much memory during query execution"

================(Build #2167 - Engineering Case #787950)================ If an application executed the following sequence:
- a remote procedure call using a different effective user than the currently logged-in user, followed by
- a DROP REMOTE CONNECTION to drop the remote connection created above, followed by
- a remote procedure call using a different effective user than the one above
then there was a small chance the server would have crashed when the second remote procedure call completed. This problem has now been fixed. It should be noted that this problem could in rare cases manifest itself when the SQL Anywhere Cockpit was used to change the Cockpit settings. ================(Build #2166 - Engineering Case #761650)================ The server may have issued an error, for example "Column <name> not found", if an INSERT, UPDATE or DELETE statement on a local table referenced a proxy table, and the changing table had a publication whose WHERE clause referenced additional tables. This has been fixed. ================(Build #2165 - Engineering Case #668971)================ When attempting to start a second server on an already started database, the second server would have reported permission denied errors. It should instead have reported “Resource temporarily unavailable”. This only happened on HP and AIX. This has now been fixed. ================(Build #2164 - Engineering Case #787592)================ Certain sub and dynamic classes built using a 1.8 JDK could not be installed in the database. This problem has now been fixed. 
================(Build #2162 - Engineering Case #787419)================ Invoking a stored procedure that used a temporary table T (declared by the invoker) with different definitions of T would have returned an error. This restriction has now been relaxed to allow some mismatch between the table definitions. Note that this is not the recommended usage; it is expected that a stored procedure will use the exact same definition of the temporary table in all executions. ================(Build #2162 - Engineering Case #738277)================ The server may have crashed, or returned unexpected errors, if a SELECT from DML referenced proxy or IQ tables. This has been fixed. ================(Build #2161 - Engineering Case #787340)================ The server would not have started if a server name with spaces was entered in the server startup dialog window. This has been fixed. ================(Build #2160 - Engineering Case #787105)================ Repeatedly executing INSERT statements with a VALUES clause containing two or more rows could have caused a crash in memory-constrained environments. This has been fixed. ================(Build #2160 - Engineering Case #785328)================ IF and CASE expressions can be optimized in some cases when used in search conditions. These optimizations can remove unneeded subquery invocations or identify new sargable predicates. In particular, IF expressions are generated when a view V is used in the null-supplying side of an outer join and V contains a column that is a constant. The following changes have been made to provide better performance for queries:

1. If a subselect expression has a LIST or COUNT aggregate in the select list and there is neither a GROUP BY nor a HAVING clause, then the subselect expression cannot be NULL. If the expression is used in the SELECT list, it will be described as not-NULL.

2. When considering a search condition of the form cond IS TRUE where cond cannot be UNKNOWN, simplify to cond.

3. 
When considering a search condition of the form cond IS FALSE where cond cannot be UNKNOWN, simplify to NOT cond.

4. When considering a search condition of the form cond IS UNKNOWN:
   a. If cond cannot be UNKNOWN, simplify to FALSE.
   b. If cond is a comparison condition of the form c0 = c1 where one input (say c0) cannot be NULL, simplify to c1 IS NULL. Other comparison relations (<, <=, >=, >, <>) are supported.

5. When considering expr IS NULL:
   a. If expr is CAST( e1 AS type ) and the cast cannot introduce NULL, simplify to e1 IS NULL.
   b. If expr cannot be NULL, simplify to FALSE.
   c. If expr is known to be the NULL value at open time, simplify to TRUE.
   d. If expr is IF pred THEN lhs ELSE rhs END IF, simplify according to the rules described below.

6. When considering a comparison condition e1 = IF cond THEN lhs ELSE rhs END IF, simplify it as described below. The IF expression may appear on the left or right of the comparison, and all comparison relations are supported.

The following table shows the simplified conditions generated for the condition: IF pred THEN lhs ELSE rhs END IF IS NULL. The simplification is only performed in cases where lhs / rhs could not generate an error or where they would necessarily be evaluated. The pred condition must be either a comparison predicate or an IS NULL predicate. 
Simplified condition                                 | Notes
FALSE                                                | None of pred/lhs/rhs can be NULL
pred IS UNKNOWN                                      | lhs/rhs cannot be NULL
(pred IS UNKNOWN) OR (lhs IS NULL)                   | lhs == rhs (special case)
(pred IS UNKNOWN) OR (pred AND lhs IS NULL)          | rhs cannot be NULL
(pred IS UNKNOWN) OR (NOT pred AND rhs IS NULL)      | lhs cannot be NULL
pred                                                 | pred cannot be UNKNOWN and lhs is known-at-open NULL and rhs cannot be NULL
NOT pred                                             | pred cannot be UNKNOWN and rhs is known-at-open NULL and lhs cannot be NULL
pred AND lhs IS NULL                                 | pred cannot be UNKNOWN and rhs cannot be NULL
NOT pred AND rhs IS NULL                             | pred cannot be UNKNOWN and lhs cannot be NULL
lhs IS NULL                                          | pred cannot be UNKNOWN and rhs == lhs (special case)

The following table shows the simplified conditions generated for the condition: e1 = IF cond THEN lhs ELSE rhs END IF.

Simplified condition                                 | Notes
cond AND e1 = lhs                                    | The RHS is known to be the NULL value at open time
NOT cond AND e1 = rhs                                | The LHS is known to be the NULL value at open time
(cond AND e1 = lhs) OR (NOT cond AND e1 = rhs)       | cond is either a comparison condition or an IS NULL condition, and lhs and rhs are either a known value or a column expression

================(Build #2159 - Engineering Case #787014)================ Under very rare circumstances, the server could have returned an error, an incorrect result, or entered an infinite loop, if a query contained Transact-SQL outer joins in subqueries that were part of a disjunctive clause. This has been fixed. ================(Build #2156 - Engineering Case #786804)================ If an application fetched a result set containing an nvarchar(1024) column from a remote server, then that column value would have been invalid if the original value was exactly 1024 nchar characters in length. This problem has now been fixed. 
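The IF-expression rules above can be illustrated with a hedged sketch; the tables and derived table below are hypothetical:

```sql
-- A constant column on the null-supplying side of an outer join is
-- represented internally as an IF expression, roughly:
--   IF <V3 row matched> THEN 1 ELSE NULL ENDIF
-- so a predicate such as V3.x IS NULL can be simplified by rule 5d
-- into a condition on the join itself, instead of being evaluated
-- row by row on the IF expression.
SELECT T1.a1
FROM T1 LEFT OUTER JOIN ( SELECT 1 AS x FROM T3 ) V3 ON T1.a1 = V3.x
WHERE V3.x IS NULL;
```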
================(Build #2155 - Engineering Case #786755)================ Procedures and functions that contained at least one input parameter of ROW or ARRAY type, where the procedure/function body was a single query, may have incorrectly reported the error “Correlation name not found”. This has been fixed. ================(Build #2151 - Engineering Case #790977)================ Under very rare timing-dependent conditions, an index that had long hash values could have caused assertion failures (for example: 200114 - Can't find values for row ... in index ...). This has been fixed. ================(Build #2150 - Engineering Case #786305)================ When using Java external environments on Mac OS X systems, the server may not have automatically found the latest installed JRE. This has been fixed. ================(Build #2148 - Engineering Case #786183)================ Engineering case 776698 resolved a problem where a domain group was included in a local group, but users in the domain group were not being located in the local group (via indirection). It introduced a problem where domain users explicitly present in the local group were no longer being located. This problem has been corrected. Indirect lookups are now performed separately from direct lookups. ================(Build #2148 - Engineering Case #786120)================ In very rare cases, the transaction log could become corrupted. The symptoms of the corruption could appear as checksum failures on page 0 of the transaction log. This has been fixed. ================(Build #2148 - Engineering Case #786112)================ When setting the QuittingTime server property using the system procedure sa_server_option(), parsing of the provided date string did not respect the date_order or nearest_century options. The date_order was always assumed to be YMD and the nearest_century was always assumed to be set to 50, despite any connection, user, or public settings. This has now been fixed. 
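A hedged example of the QuittingTime behavior described for Engineering case 786112; the option value and date string are illustrative:

```sql
-- The date string is now parsed using the connection's date_order and
-- nearest_century settings rather than always assuming YMD and 50.
SET TEMPORARY OPTION date_order = 'DMY';
CALL sa_server_option( 'QuittingTime', '31/12/2025 23:00' );
```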
================(Build #2146 - Engineering Case #785858)================ In some cases, dynamic cache resizing on Linux systems might not have behaved correctly. This has been fixed. ================(Build #2146 - Engineering Case #785851)================ SQL Anywhere installations no longer include PHP drivers. They are now posted to a web page, but the versions posted only include the .0 release of each major/minor version. The PHP external environment attempts to load the external environment DLL that matches the current phpversion(), which includes the release number. Unless the release number is 0, or an appropriate driver was previously installed, the correct driver will not be found and the PHP external environment will fail to start. This has been fixed. If a DLL with the full version number is available, it will be used. Otherwise, the DLL with the .0 release number will be used; e.g. PHP 5.6.5 would use the 5.6.0 DLL. SQLA 12.0.1 and 16.0.0 should continue to work as before, but the fix was included to allow for possible future changes. Workarounds include (one of):
- rename the SQLA PHP modules to a name that will be found
- set up a php.ini file containing the “extension” setting that will load the SQLA PHP modules
- compile the PHP drivers in the SDK directory to match your PHP installation
================(Build #2144 - Engineering Case #785640)================ If the Content-Type header began with "multipart/" but was not "multipart/form-data" (e.g. multipart/mixed), the HTTP server would have returned a 400 error, even though the request itself was valid. This has been fixed. The body of the request is not parsed for these Content-Types, nor is it accessible through HTTP_VARIABLE( ‘body’ ). The body may be accessed through the HTTP_BODY() function. 
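For the multipart handling described in Engineering case 785640, a hedged sketch of a web service procedure that reads the raw body; the procedure and service names are hypothetical:

```sql
-- For multipart Content-Types other than multipart/form-data, the body is
-- not parsed and is not available via HTTP_VARIABLE( 'body' ), but the raw
-- bytes can still be read with HTTP_BODY().
CREATE PROCEDURE EchoRawBody()
RESULT ( body LONG BINARY )
BEGIN
    SELECT HTTP_BODY();
END;

CREATE SERVICE EchoRaw TYPE 'RAW' AUTHORIZATION OFF USER DBA
    AS CALL EchoRawBody();
```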
================(Build #2144 - Engineering Case #785537)================ On Windows systems, if the SQL Anywhere database server was spawned by an application and that application did not include environment strings (in particular, the SystemDrive environment variable), then the database server would not have been able to resolve the location of the ALLUSERSPROFILE folder correctly. The folder path would have contained an unresolved environment string, possibly resulting in misplaced files. A check has now been added for this problem and the current directory will be used instead. ================(Build #2143 - Engineering Case #785450)================ The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1o. ================(Build #2142 - Engineering Case #785330)================ Under some circumstances, running the Index Consultant against workloads that included queries against remote tables may have caused the server to crash. This has been fixed. ================(Build #2142 - Engineering Case #785327)================ When comparing values of type CHAR and NCHAR, SQL Anywhere uses inference rules to determine the type in which the comparison should be performed. Generally, if one value is based on a column reference and the other is not, the comparison is performed in the type of the value containing the column reference. If a view column (v) defined as a string literal of type NCHAR was used in a query where the same constant string was used elsewhere as an expression (c), and the query had 100 or fewer constants, then a comparison between a CHAR column and the constant literal (c) might have incorrectly failed to use the CHAR type. This has been fixed. 
Further, when a query contained two tables (say R and S) where one had a CHAR column and the other an NCHAR column (say R.ch and S.nch), and both columns were equated to the same constant, then the server could have improperly inferred that the two columns are equal:

R.ch = 'A' AND S.nch = 'A' ==> R.ch = S.nch

This inference is not correct. This has been fixed and such conditions are no longer improperly inferred. ================(Build #2142 - Engineering Case #785325)================ When inserting into a table, if the SELECT block contained the sa_rowgenerator procedure, then a work table was used. This has been changed; the work table is no longer generated unless other conditions require it. ================(Build #2142 - Engineering Case #785322)================ When estimating the cost of a join, the server considers any expensive predicates that might be evaluated. For example, if there is a subquery predicate, it will affect the cost of evaluating the join. These expensive predicates were not always included in the cost of evaluating equi-joins. This has been changed so that these predicates are considered when estimating the cost of a plan. For a particular customer query affected by this issue, run time was reduced from 18,268 sec to 247.8 sec with this optimization. ================(Build #2142 - Engineering Case #785318)================ When using the Plan Viewer tool in dbisql, the "Detailed statistics" mode executes the plan. In this mode, precise timing is not recorded for every node in the plan, in order to minimize the distortion introduced by timing. Nevertheless, more information is available and after this change it is now displayed. 
Statistics now included for all plans that have been executed:

In the graphical plan, if the plan has been executed then every node has at least the following in Subtree Statistics:
- Invocations (actual)
- RunTime (estimate)
- RowsReturned (estimate and actual)

If the plan has been executed, every table scan and index scan node has the following:
- Total rows read -- rows that were read from the table before applying any search conditions
- Total rows pass scan predicates -- if there are scan predicates, this line indicates how many rows passed the scan predicates [otherwise, the line is not included]
- Total rows returned -- rows that pass all predicates for the scan and were returned

Further, if the plan has been executed then individual predicates show the actual number of evaluations and the number of times they were true. Previously this was only shown for “Detailed and node statistics”.

If a plan has been executed, the root node now contains the following:
- RunTime -- the actual active time is always shown. In certain cases it was not available.
- ReqCountBlockIO / ReqTimeBlockIO
- ReqCountBlockLock / ReqTimeBlockLock
- ReqCountBlockContention / ReqTimeBlockContention -- only if request timing is enabled with -zt
- CPUTime -- in addition to the estimate, the measured approximate CPU time is now shown
- QueryMemMaxUseful and QueryMemLikelyGrant -- these are now always included if the plan was executed

If a plan has been executed, the row counts for each node are now used to determine line thickness in the graphical plan viewer. Previously, these were only available when “Detailed and node statistics” were available.

Formatting changes: The title for nodes in the graphical plan now includes the number of rows returned for the node. If the node was invoked multiple times, the invocation count is also displayed. E.g. 
Table Scan (750 rows/10 invocations) Scan employee sequentially

When stored procedures appear in a plan, the correlation name for the procedure is displayed. This allows us to distinguish among multiple instances of the same procedure. If a predicate has a cost estimate (for example, it contains a subquery), then the predicate has a suffix “cost .123 sec” to indicate the estimated cost per evaluation. When generating a text plan (EXPLANATION or PLAN), if the plan has actually been executed (for example, in the RememberLastPlan), then actual row counts and numbers of invocations are now included. When generating a text plan (EXPLANATION or PLAN), if the plan includes an Exchange, then only the first branch is displayed, with an indication of how many branches were present. If the plan was executed, then the row count of each branch is included, separated by semicolons. ================(Build #2142 - Engineering Case #785292)================ In some contexts, duplicate rows do not affect the result of a query. For example, when generating rows for a UNION DISTINCT operation, duplicates are eliminated. This change modifies the DerivedTable operator so that in contexts where duplicates are not needed, the operator eliminates duplicates eagerly. When the derived table would return a row that is a duplicate of the immediately previous row, it is eliminated. Duplicate detection is based only on the prior row, so the cost of detection is low, but only rows that are immediately repeated are eliminated. When eager duplicate detection is selected for a plan, the graphical plan shows “Eliminate duplicates eagerly yes”. For plans with statistics, the number of duplicates eliminated is shown. For a query of about 1.5 million rows with many duplicate values, this optimization can improve run time by up to 30%. ================(Build #2142 - Engineering Case #785291)================ INSERT statements did not use parallel execution plans. 
This has been changed so that parallel plans are now considered for the SELECT block if the other restrictions on parallel plans are met. ================(Build #2142 - Engineering Case #785289)================ If a query contains an ANY or ALL subquery that is not correlated to the outer query block, the server may choose an execution plan that materializes all rows of the subquery one time with an index, so that each row of the outer block can be compared to the stored results. If the subquery also contained a UNION where at least one branch required a work table and at least one branch did not, then the plan included work tables under the union for all branches requiring materialization. These were redundant due to the materialization at the root, and are no longer included. ================(Build #2142 - Engineering Case #785271)================ When estimating how many rows are returned for an ad-hoc join (one that is not a PK/FK join), histograms on the joined columns are usually used to estimate how many rows will match. When one or both of the columns are declared as unique, histograms were previously not considered, and in some cases this caused the number of returned rows to be underestimated due to skew in the inputs. This change includes information from the histograms to increase the estimated number of rows. ================(Build #2142 - Engineering Case #785266)================ During the semantic transformation phase of query processing, the server normalizes and extends predicates in the query in order to find useful search conditions. One step of predicate normalization considers equality predicates that partition values. Consider:

R.x = 1 AND T.x = R.x ==> T.x = 1

Before this change, this normalization also inferred join conditions, for example:

R.x = 1 AND T.x = 1 ==> R.x = T.x

The inferred predicate is correct, but it does not help find a faster way to execute the query. 
These additional join conditions are no longer generated when the equality partition contains a constant. ================(Build #2141 - Engineering Case #784735)================ Under rare circumstances, a query that generated a large intermediate result set containing strings of medium length (usually in the range of 128-256 bytes long) could have crashed the server. This has been fixed. ================(Build #2140 - Engineering Case #785134)================ Under rare circumstances, a long-running, memory-intensive query could have caused the server to crash. This has been fixed. ================(Build #2139 - Engineering Case #785008)================ Authentication may have failed when using PAMUA. This has been fixed. ================(Build #2136 - Engineering Case #784731)================ Under rare circumstances, cancelling a parallel query could have caused a memory leak. This has been fixed. ================(Build #2136 - Engineering Case #784717)================ Attempting to use the SYNCHRONIZE statement while connected to a server running on Linux/ARM would have failed with a “feature not supported” error. This problem has now been fixed. ================(Build #2136 - Engineering Case #784450)================ ST_Distance computations between planar points, or between a planar point and a non-curve line segment, were inappropriately rounded to the nearest multiple of the SRS gridsnap value. Consequently, a measured distance less than the SRS tolerance could have been rounded up to a value greater than or equal to tolerance, which could have caused the predicate ST_WithinDistance to return FALSE for a specified distance of zero, even though the predicate ST_Intersects returned TRUE for the same pair of geometries. This has been fixed. ================(Build #2135 - Engineering Case #780004)================ Under rare circumstances, a query executed using a parallel bloom filter operator could have caused a server crash or an assertion failure - “memory allocation too large”.
This has been fixed. ================(Build #2130 - Engineering Case #784051)================ If an application made a remote procedure call or executed sp_forward_to_remote_server(), and the call on the remote server generated an error, then the server could in some cases have given the generic “remote server not capable” error rather than returning the actual error from the remote server. This problem has now been fixed and the original error is now returned. ================(Build #2130 - Engineering Case #783734)================ Under rare circumstances, an AUTO/MANUAL text index operation could have failed to return an error when an error was encountered. This has been fixed. ================(Build #2130 - Engineering Case #782601)================ If the query for which the graphical_plan was being calculated included a reference to a stored procedure that was expected to return a result set, but did not do so, the SELECT graphical_plan( … ) statement would have returned a warning at OPEN time. This has been fixed. Note, the issue could also have affected some queries referencing such a stored procedure. ================(Build #2130 - Engineering Case #773426)================ Under exceptionally rare circumstances, the server would have taken a long time to build the final query plan. This may have occurred for very complex and large queries. During this time DDL statements were blocked, which caused subsequent requests to block as well. This has been fixed. ================(Build #2127 - Engineering Case #783702)================ The server may have crashed when querying for recommended indexes from a tracing database using sa_recommend_indexes(). This has been fixed. 
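The SELECT graphical_plan( … ) scenario from Engineering case 782601 above can be sketched as follows; the procedure and query here are hypothetical illustrations, not the original reproduction:

```sql
-- Hypothetical procedure that declares a result set but does not always
-- produce one.
create or replace procedure maybe_rows()
result( x int )
begin
    if 1 = 0 then
        select 1;   -- this path never runs, so no result set is returned
    end if;
end;

-- Requesting a plan for a query over the procedure. Before the fix, the
-- OPEN of this statement could have returned a warning when the procedure
-- produced no result set.
select graphical_plan( 'select * from maybe_rows()' );
```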
================(Build #2127 - Engineering Case #783691)================ If a web procedure was created whose certificate clause included both a certificate_name and an invalid parameter (for example, misspelled or missing a value), the error may have been reported as “Certificate <certificate name> not found”. This has been fixed. ================(Build #2124 - Engineering Case #783424)================ The predicate normalization phase of query processing was incorrectly done two times for a SELECT under an INSERT statement. This did not cause incorrect results, but in some cases it could cause statements to execute slowly. This has been fixed so that the predicate normalization phase is now applied only once to the SELECT under an INSERT statement. Predicate normalization is performed during the semantic transformation phase. http://dcx.sap.com/index.html#sa160/en/dbusage/queryopt-b-3197621.html*d5e16832 One particular predicate normalization is the handling of equality predicates when the ANSINULL option is set to Off. Note that by default this option is set to Off for Sybase Open Client and jConnect connections by calling the sp_tsql_environment system procedure. When ANSINULL=Off, predicates are transformed during the normalization phase to reflect the Transact-SQL semantics, for example: T.x = @v -> (T.x = @v) OR (T.x is null and @v is null) Additional predicate normalization steps during the Semantic transformation phase simplify conditions in the WHERE clause and identify useful search conditions. These normalizations do not use the current value of columns or variables. During the Pre-optimization phase, further predicate analysis is used to find relevant indexes or materialized views that may be used in the query access plan. At this point, values of variables can be used to simplify the search conditions. 
drop table if exists R;
drop table if exists T;
create table R(x int null);
create table T(x int null);
create or replace variable @V int = 1;
set temporary option ANSINULL=OFF;
/*Q1*/ select rewrite('select T.x from T as T where T.x=@V');
-> select T_1.x from T as T_1 where(T_1.x = @V or T_1.x is null and @V is null)
When processing an INSERT statement, the Predicate normalization phase was incorrectly applied twice to the SELECT statement that generates rows for the INSERT. While repeating normalization does not give incorrect answers, it can contribute to slower query plans than needed. In particular, the transformation for ANSINULL=Off was repeated, generating a more complex condition than needed: T.x = @v -> (T.x = @V) or (T.x is null and @v is null) -> (T.x = @V) or (T.x is null and @v is null) or (T.x is null and @v is null) Because of the effect of repeating the ANSINULL=Off transformation, this more complex search condition was simplified by later Predicate normalization steps: -> (T.x = @v or T.x is null) and (T.x = @v or @v is null) This more complex condition could, in some cases, lead to slower statement execution than needed. In versions of the server before 16.0.3051, the Pre-optimization phase would simplify this more complex condition as follows: (T.x = @v) and (T.x = @v or T.x is null) This simplified condition qualified for index access, but could lead to selectivity estimation problems due to the redundant predicate on T.x, causing the rows returned from T to be underestimated and potentially affecting plan quality.
While this type of predicate is generated with ANSINULL=Off, in particular with the issue related to Predicate normalization being repeated under INSERT, the same effect could occur with certain structures of search conditions, such as the following:
set temporary option ANSINULL=ON;
/*Q2*/ select rewrite('select T.x from T as T where T.x=@V OR T.x is null and @V is null OR T.x is null and @V is null');
-> select T_1.x from T as T_1 where(T_1.x = @V or T_1.x is null) and(T_1.x = @V or @V is null)
A related change, Engineering case 775142 (Predicate optimizations improved), provides new pre-optimization capabilities during predicate analysis that simplify these more complex search conditions, avoiding the selectivity errors. ================(Build #2124 - Engineering Case #783420)================ If a search condition contained integer arithmetic such as the following: select * from sys.dummy where 65536*65536*dummy_col = 0 then it was possible for the statement to be processed without reporting the overflow error. In this case a wrong answer could possibly have been returned. In order for the problem to occur, the overflow must have occurred with a +, -, or * operation on integer constant literals appearing in an expression of a search condition. This has been corrected. ================(Build #2124 - Engineering Case #783410)================ Queries containing the built-in functions HEXTOINT, ROUND, STRTOUUID, and YMD may have incorrectly described an expression as NOT NULLABLE when the option conversion_error=Off and the input to the function was a non-nullable column or expression. If the input value would have caused a conversion error (SQLCODE -157 or -158) when option conversion_error=On, the built-in functions could have returned NULL; as well, the query could have returned an incorrect result if the result of the builtin was used in a predicate or as input to another built-in such as ISNULL or COALESCE. This has been fixed.
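A minimal sketch of the conversion_error behaviour described in Engineering case 783410 above (the input values are made up):

```sql
-- With conversion_error Off, a failing conversion yields NULL instead of
-- SQLCODE -157/-158, so the result of HEXTOINT can be NULL even when its
-- input is a non-nullable column or expression.
set temporary option conversion_error = 'Off';
select hextoint( 'zz' )                  as bad_hex,       -- NULL, no error
       coalesce( hextoint( 'zz' ), -1 ) as with_default;  -- falls back to -1
```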
================(Build #2123 - Engineering Case #783357)================ On Linux systems that had been up for a long time, the 32-bit server would have returned LastRequestTime connection properties that were in the future. This has been fixed. ================(Build #2123 - Engineering Case #783348)================ The changes for Engineering case 779078 may have caused the server to incorrectly report the error "Function or column reference to 'col' must also appear in a GROUP BY" for complex queries that contained alias names in the GROUP BY list. This has now been fixed. ================(Build #2120 - Engineering Case #783215)================ In a rare scenario where a cached plan encountered an error, a memory leak could have occurred. This has been fixed. ================(Build #2120 - Engineering Case #782680)================ KBA 2169187: When using the HANAODBC remote server class, HANA statement routing would not always have occurred. This problem has now been fixed. ================(Build #2119 - Engineering Case #783120)================ Some round-earth polygons that should have been rejected due to having a 0-area ring may have been accepted. An example of this is ‘Polygon ((0 0, 10 10, 20 20.00001, 10 10, 0 0))’. This has been fixed. ================(Build #2119 - Engineering Case #783112)================ The performance of making external environment procedure or function calls has been improved significantly. ================(Build #2119 - Engineering Case #783067)================ If a DELETE statement with an ORDER BY clause was executed over a table on a remote server, the ORDER BY clause was omitted when the statement was forwarded to the remote server. If an UPDATE statement with an ORDER BY clause was executed over a table on a remote server that did not have the capability “Order by allowed in update”, the ORDER BY clause was omitted when the statement was forwarded to the remote server. These have been fixed.
Both statements now return SQLCODE -706 if they contain an ORDER BY clause and the remote server does not have the capability “Order by allowed in update”. ================(Build #2119 - Engineering Case #782945)================ If a Windows application enlisted a connection in a distributed transaction using the Microsoft Distributed Transaction Coordinator, and if a foreign key violation or some error occurred at transaction commit time due to the option 'wait_for_commit' being set to On, then the server would hang. This problem has now been fixed. ================(Build #2119 - Engineering Case #782421)================ The server did not return an error if an INSERT ON EXISTING UPDATE statement violated unique indexes on the insert table, and silently deleted the violating rows. This has been fixed. ================(Build #2118 - Engineering Case #783019)================ If a DELETE statement in a stored procedure contained an ORDER BY clause, this clause was ignored when the procedure was executed and would be omitted from the procedure definition if queried from the system tables. This has been fixed. ================(Build #2118 - Engineering Case #782872)================ Under rare circumstances, the server could have crashed or returned incorrect results when executing a parallel query with hash join operators and large intermediate results. This has been fixed. ================(Build #2118 - Engineering Case #782604)================ Under exceptionally rare circumstances, the server may have hung indefinitely when executing the regexp_substr SQL function. This has been fixed. ================(Build #2116 - Engineering Case #781783)================ Under exceptionally rare circumstances, the server may have returned the error "The optimizer was unable to construct a valid access plan" if a very complex query contained proxy tables and some table references had the same table name and same correlation name or no correlation name. This has been fixed.
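The INSERT … ON EXISTING UPDATE situation from Engineering case 782421 above can be illustrated with a sketch like the following (table and values are hypothetical):

```sql
create table t1(
    pk  int primary key,
    val int not null unique    -- secondary unique index
);
insert into t1 values( 1, 10 );
insert into t1 values( 2, 20 );

-- pk = 1 already exists, so this row is applied as an update setting
-- val = 20, which collides with row ( 2, 20 ) on the unique index. The
-- server now reports the uniqueness violation instead of silently
-- deleting the conflicting row.
insert into t1 on existing update values( 1, 20 );
```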
================(Build #2115 - Engineering Case #782698)================ When the server was running on Windows 10, the value of property(‘Platform’) was ‘Windows8’. This has been fixed. ================(Build #2114 - Engineering Case #782677)================ Engineering Case 769059 introduced a small performance regression for wide insert statements. This has now been fixed. ================(Build #2112 - Engineering Case #781752)================ Under rare circumstances, the server could have crashed when canceling a parallel query. This has been fixed. ================(Build #2112 - Engineering Case #780129)================ An ALTER TABLE statement with more than one alter clause – for example, ADD COLUMN and ALTER COLUMN allow NULL – could have returned an unexpected error ‘Column x in table y cannot be NULL’. This has been fixed. A workaround is to execute a separate ALTER statement for each required alteration. ================(Build #2110 - Engineering Case #783287)================ If the Unload utility (dbunload) was being used to do an unload with a reload, it could have crashed if the source database was shut down during the reload operation. This problem has been fixed. ================(Build #2108 - Engineering Case #781925)================ In an HA or scale-out environment, in rare cases it was possible to fail assertion 112002 on the mirror or copynode if the file name that the transaction log was being renamed to was already in use. The server attempts to delete this file if it already exists before the rename, but if it was unable to delete it (if it is already being accessed), the assertion could have occurred. Note that one case where this could have occurred was if there were transaction log files for multiple mirrored databases in the same directory and one of the databases was being started or stopped. 
It is recommended that each directory containing transaction log files for mirrored databases only contains transaction log files for one database (i.e. never having transaction log files for multiple databases in the same directory). Now the server will retry deleting the file for up to 10 seconds if it is being accessed. ================(Build #2108 - Engineering Case #776698)================ On Windows, if a global group was placed in a server’s local group, a user was a member of that global group, and a mapping existed between the local group and a login user ID, then an Integrated Login would have failed. For example, given that global groups EngineeringA and EngineeringB are members of the local group Engineering and the local group Engineering has an integrated login mapping as follows: GRANT INTEGRATED LOGIN TO Engineering AS USER ENGINEER; Then if the user attempting an integrated login belonged to the global group EngineeringA or EngineeringB, the login attempt should have succeeded, but did not. This problem has now been fixed. ================(Build #2107 - Engineering Case #781800)================ The usage screen for the server was missing the -tq switch. This has been fixed. ================(Build #2104 - Engineering Case #781677)================ A user would have encountered problems when setting the event tracing file target options "flush_on_write" and "compressed", as these options did not accept input values "on" or "off". The only accepted input was "yes", "true", "no", "false". If anything other than these values was provided, no error would be given, and the option was simply set to "off". This has been fixed. Also, an error was wrongly given if first "flush_on_write" was set to "true" and then "compressed" was set to "false". 
This is a valid configuration, and no longer gives an error. ================(Build #2104 - Engineering Case #781332)================ In very rare cases, the server would have crashed with assertion 200509, indicating that a checksum of critical sectors of page 0 had failed. This was specific to page 0 of the transaction log, and has now been fixed. ================(Build #2104 - Engineering Case #776458)================ If a Transact-SQL CREATE PROCEDURE statement contained a DECLARE CURSOR for a procedure CALL, then the server would have returned a syntax error near 'execute'. This has been fixed. ================(Build #2103 - Engineering Case #781524)================ If an application executed an INSERT statement that inserted values into a temporary table, and if some of those values came as the result of an external environment function call, then the server could on occasion have failed the insert with a 'table not found' error. This problem has now been fixed. ================(Build #2102 - Engineering Case #781486)================ SQL Anywhere clients and servers now use a new library for LDAP support instead of libsybaseldap[64].dll/so. If deploying the clients or servers with LDAP support, the new library must be included. In addition, a new library will be used by the server when the -fips switch is used. The new library names are (NN is the major version number): Windows: dbldapNN.dll, dbldapfipsNN.dll Unix server and threaded clients: libdbldapNN_r.so Linux server and threaded clients: libdbldapfipsNN_r.so Unix non-threaded clients: libdbldapNN.so ================(Build #2102 - Engineering Case #781387)================ The server would have taken a very long time to parse large parameter lists of function or procedure calls. This has been fixed. ================(Build #2101 - Engineering Case #781313)================ Running the server as follows: dbeng16 -ux -? would have resulted in a usage window being displayed.
Upon dismissing that window, the server would have reported: "pure virtual method called." This has been fixed. ================(Build #2101 - Engineering Case #780884)================ In very rare timing dependent cases, concurrently connecting to the server could have resulted in the server crashing. This could have only occurred if the number of connections (both established and in the process of connecting) was greater than it had ever been since the server had started. This has been fixed. ================(Build #2101 - Engineering Case #780632)================ If execution of an event encountered an annotation error – for example, a table referenced in the event did not exist, or the owner of the event did not have permissions to select from a table – the event would have remained invalid until altered, recreated, or until the database was restarted. This has been fixed. The change allows such an event to be safely reloaded; however, fixing the actual error (creating the table, granting permissions) is required before the event can execute successfully. ================(Build #2100 - Engineering Case #781245)================ The LastReqTime connection property was showing the time in the previous time zone even after the time zone was adjusted because of daylight saving time. This has been corrected. ================(Build #2100 - Engineering Case #781239)================ In some cases the server may have crashed while executing the NOTIFY TRACE EVENT statement. This has been fixed. ================(Build #2095 - Engineering Case #780116)================ If the server was shut down just as a backup was starting, the server could have hung and never completed the shutdown. This has been fixed. ================(Build #2094 - Engineering Case #779711)================ When running an archive backup of a large database (greater than 5 GB), the server would appear to be hung. Other connections would also hang and new connections would not be allowed.
The server would eventually continue and the backup would complete, but the server could be unavailable for several minutes. This has been fixed. ================(Build #2094 - Engineering Case #777195)================ A sync mirror or copynode may not have achieved maximum performance if many small transactions were done frequently (multiple small transactions per second). Performance has been increased in this case by configuring the mirror-to-mirror connections to use larger TCP/IP buffer sizes (which can reduce blocking). Also, on mirror server connections, the SendBufferSize and ReceiveBufferSize protocol options were being ignored. Now these protocol options are respected for mirror server connections. ================(Build #2093 - Engineering Case #780376)================ The SET OPTION statement allowed the user to set min_password_length to a number greater than 255. When this happened, the user was not able to change the DBA password until min_password_length was set to a number within the valid range. This has been fixed so that if the user attempts to set min_password_length to a number greater than 255, the server will return the error: -201 : Invalid setting for option 'min_password_length' ================(Build #2092 - Engineering Case #677178)================ Under rare circumstances, the server may have crashed if there was a cycle in computed column dependencies and a COMPUTE expression contained a subselect. This has been fixed. ================(Build #2091 - Engineering Case #780137)================ If a statement contained a distinct aggregate (e.g., COUNT( DISTINCT T.x)) and an error was detected while evaluating aggregates, then an improper result might have been returned. The problem has been fixed.
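The min_password_length validation from Engineering case 780376 above, sketched:

```sql
-- Values above 255 are now rejected with error -201 instead of being
-- accepted and locking out later password changes.
set option public.min_password_length = 300;  -- fails after the fix
set option public.min_password_length = 8;    -- within the valid range
```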
================(Build #2091 - Engineering Case #780131)================ When using unary minus twice in a row (e.g., "- -5"), an error in unparsing could have led to incorrect behaviour for procedures, views, or statements logged to the transaction log. For example, the following function should return 5: create function F() returns int begin return - - 5; end; It incorrectly returned NULL because the 'return - - 5' is unparsed as 'return--5;' and the '--' is a comment start. Other types of statements could have led to syntax errors. When unparsing CHECK constraints, the server previously removed one set of extra parentheses: CHECK ( (x < 5) ) ==> CHECK( x < 5 ) However, additional pairs were not removed: CHECK ( ((x < 5)) ) ==> CHECK( (x < 5) ) This has been changed so that all outer parentheses are removed. ================(Build #2091 - Engineering Case #780124)================ TSQL join conditions such as *= and =* can be used to express outer joins when they are used in the WHERE clause. In some queries, they are used improperly in the SELECT list or GROUP BY list (for example, when they are used in IF or CASE expressions). These uses are invalid but were not recognized as such, leading in some cases to answers that did not match what the query appeared to intend or in other cases to an error message that did not clearly identify the problem. This has been fixed; the following error is now returned for these types of queries.
INVALID_TSQL_JOIN_TYPE 52W24 -681L "Invalid join type used with Transact-SQL outer join" ================(Build #2090 - Engineering Case #779832)================ Under the following conditions and under extremely rare timing the transaction log could have become corrupt: - No operation on a table had been written to the log since the database had been started, but a new index on the table had been created - The table had a unique non-nullable index (for example a primary key) - At least two operations were executed in parallel - One of the parallel operations must have been an UPDATE This has now been fixed. ================(Build #2090 - Engineering Case #779828)================ When using a quantified subquery, invalid comparison relations for geometries, such as "<", were permitted even though these are not semantically meaningful. These were evaluated using an internal sorting order for geometries. This has been fixed. Now a comparison such as the following results in an error: select * from T where pt < any ( select pt from t_sp ) or x < 10 The comparison '<' cannot be used with geometries. SQLCODE=-1440, ODBC 3 State="HY000" Further, quantified subqueries now apply the same automatic casting rules as scalar comparisons, allowing a string (WKT, WKB or XML) to be compared to a geometry value. The string is implicitly cast to a geometry before the comparison. ================(Build #2088 - Engineering Case #779827)================ If a database server was configured to listen on multiple TCP ports and to register itself with LDAP, the addresses listed in the LDAP entry only contained one of the port numbers. This has been fixed. ================(Build #2088 - Engineering Case #779487)================ An application that attempted to execute a remote function call that returned a numeric or decimal value would have caused a server crash. This problem has now been fixed.
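A sketch of the valid and invalid uses of the Transact-SQL outer join operators described in Engineering case 780124 above (tables R and T are hypothetical):

```sql
-- Valid: *= expresses a left outer join in the WHERE clause.
select R.x, T.x
  from R, T
 where R.x *= T.x;

-- Invalid: *= inside a CASE expression in the SELECT list. This now
-- returns SQLCODE -681 ("Invalid join type used with Transact-SQL outer
-- join") instead of an unclear error or unintended results.
select case when R.x *= T.x then 1 else 0 end
  from R, T;
```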
================(Build #2088 - Engineering Case #778920)================ In some rare situations a redo log could have become corrupted after creating an index on a table that had DML activity. This has been fixed. ================(Build #2087 - Engineering Case #779673)================ Under rare circumstances, an aggregate query running on a server with low memory conditions could have failed assertion 106107 - "Unexpected field type ... when trying to create read set". This has been fixed. ================(Build #2086 - Engineering Case #779448)================ The server may have returned the error "Column '???' not found", or "Invalid expression near 'Col1'" for an INSERT, UPDATE, or DELETE statement that was part of a stored procedure if a table column was dropped or renamed and the procedure was in execution at the time. This has been fixed. ================(Build #2085 - Engineering Case #779383)================ In rare timing dependent cases, if the primary database was stopped but the database server continued to run (for example by stopping the database with a STOP DATABASE statement), the mirror may have failed to take over as primary. In this case, the message 'Database "<database name>" mirroring: neither partner nor arbiter confirmed this server could become primary' was logged to the console of the server that was attempting to take over as primary. This has been fixed so that the mirror will now correctly take over as primary. ================(Build #2084 - Engineering Case #779152)================ Incorrect information about an INPUT/OUTPUT parameter of a Transact-SQL stored procedure could have appeared in the SYSPROCPARM system table. This has been fixed. Note, the issue did not affect execution of the stored procedure. Recompiling an affected procedure using ALTER PROCEDURE … RECOMPILE would have fixed the catalog information.
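As noted in Engineering case 779152 above, recompiling an affected procedure refreshes its catalog information; the procedure name in this sketch is hypothetical:

```sql
-- Rebuilds the stored definition, correcting the SYSPROCPARM rows for
-- the procedure's INPUT/OUTPUT parameters.
alter procedure dbo.my_tsql_proc recompile;
```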
================(Build #2084 - Engineering Case #779078)================ In very rare cases, the server may have crashed if the null supplying side of an outer join contained a grouping query block based on constant expressions. This has been fixed. ================(Build #2084 - Engineering Case #771704)================ Invoking a system stored procedure with an invalid named parameter could have succeeded without an error. This has been fixed. For example, this query now returns an error: Select * From sa_locks( garbage=1); ================(Build #2083 - Engineering Case #778136)================ UPDATE and positioned UPDATE statements could not update row fields and array elements of Transact-SQL row and array variables. This has been fixed. ================(Build #2080 - Engineering Case #778534)================ Under rare circumstances, the server could have crashed when processing an inlined stored procedure or function definition on a very busy server. This has been fixed. ================(Build #2076 - Engineering Case #778371)================ In memory-constrained environments, queries containing a subquery which contained a ROLLUP, CUBE, or GROUPING SETS operation could have failed the non-fatal assertion 102501 (Work table: NULL value inserted into not-NULL column). This has been fixed. A workaround is to rewrite the subquery as a join. ================(Build #2076 - Engineering Case #778369)================ If a server was started on a Windows system with the -qw command line option, but no -n server_name option (i.e. the server name was derived from the database name), the server would have incorrectly shown a tool tray text of "??? - SQL Anywhere Personal Server" (or "… Network Server"). The tool tray text is shown when the mouse hovers over the tool tray icon. This has been fixed to display the server name instead of ???.
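The shape of a query affected by Engineering case 778371 above might look like the following sketch (table and column names are hypothetical); the documented workaround is to express the derived table as a join instead:

```sql
-- A subquery (here, a derived table) containing ROLLUP could fail
-- assertion 102501 in memory-constrained environments before the fix.
select o.region, d.total
  from orders o,
       ( select region, sum( amount ) as total
           from orders
          group by rollup( region ) ) d
 where o.region = d.region;
```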
================(Build #2076 - Engineering Case #777709)================ Creating a view user1.view2 over a view user1.view1 could have returned an error if view1 referenced a user defined function (UDF) owned by a different user, and user1 had no execute permissions on the UDF. This has been fixed. Note that the permissions on the UDF will still be required when the view is selected from. ================(Build #2075 - Engineering Case #778454)================ Connections to a read-only database could have resulted in a large number of disk reads. This has been fixed. ================(Build #2075 - Engineering Case #778303)================ In rare, timing dependent conditions, the server could have crashed when executing CONNECTION_PROPERTY( 'UtilCmdsPermitted', n ), CONNECTION_PROPERTY ( 'Progress', n ) or CONNECTION_PROPERTY( 'CurrentProcedure', n ) where n was a connection number other than the current connection. Note that these properties are executed as part of calls to the sa_conn_properties system procedure. This has been fixed. ================(Build #2075 - Engineering Case #777696)================ If an Embedded SQL (ESQL) PUT was executed and no column values were specified, a SQL Anywhere database server assertion failure would have occurred, given a simple query on a table with COMPUTED columns and certain cursor types. The following is an ESQL code sample fragment showing a PUT statement.
EXEC SQL declare c3 cursor for s3;
EXEC SQL open c3 using descriptor sqlda;
// Do a put with three columns
sqlda->sqlvar[0].sqldata = NULL; // NULL descriptor => insert default value
sqlda->sqlvar[1].sqldata = NULL; // NULL descriptor => insert default value
sqlda->sqlvar[2].sqldata = NULL; // NULL descriptor => insert default value
EXEC SQL put c3 using descriptor sqlda;
A PUT such as the one shown above might be used to insert a new row into a table where column values have defaults (for example, DEFAULT AUTOINCREMENT), column values can be NULL, and/or column values are computed. If the prepared SQL query was simple (for example, SELECT * FROM TBL), then the assertion failure could have occurred. The same problem could have occurred in an OLE DB application when inserting an empty row into a table with the characteristics described above. This problem has been fixed. ================(Build #2074 - Engineering Case #778038)================ The server would have crashed if the statement ALTER INDEX REBUILD failed with an SQL error. This has been fixed. ================(Build #2074 - Engineering Case #777958)================ When using the DECRYPT() function with “format=raw” and PKCS5 padding, the decryption could have failed with error -851 “Decryption error: Input must be a multiple of 16 bytes in length for AES” if the ciphertext input came from a string manipulation function such as substring(). This has been fixed. ================(Build #2073 - Engineering Case #777893)================ When constructing a round-earth geometry that crossed the equator and was invalid (e.g. self-intersecting), the server may have reported an error such as INVALID_POLY_RE_SIZE, even when st_geometry_on_invalid was set to ‘ignore’. This has been fixed. There is no known workaround.
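The CONNECTION_PROPERTY calls named in Engineering case 778303 above take an optional connection number; a sketch follows (the connection id 5 is made up):

```sql
-- Reading properties of another connection; before the fix this could
-- crash the server in rare, timing dependent conditions.
select connection_property( 'Progress', 5 ),
       connection_property( 'CurrentProcedure', 5 );

-- sa_conn_properties retrieves the same properties for all connections.
call sa_conn_properties();
```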
================(Build #2073 - Engineering Case #777436)================ If an application made an external environment call, and if that external environment call returned with an open procedure cursor, then the server would have crashed if the application subsequently disconnected without getting the external environment to close the procedure cursor. This problem has now been fixed. ================(Build #2073 - Engineering Case #774595)================ If an application had a trigger that referenced a materialized view, and if that trigger was subsequently fired while making an external environment call, then there was a chance the server could have crashed. This problem has now been fixed. ================(Build #2072 - Engineering Case #777686)================ In very rare cases, a database corruption could have occurred if an ALTER TABLE statement was executed following a number of DELETE FROM statements. The corruption would have been reported as assertion 200610 when dbvalid is run on the database. The corruption can be fixed by unloading and reloading the database, or specifically by unloading the affected table, dropping the table, and then creating and reloading the table. This has been fixed. ================(Build #2071 - Engineering Case #777626)================ Specific DDL statements could have caused incorrect behavior when executed concurrently in a procedure statement. The incorrect behaviors include:
- Server crash
- Server assertion failure
- Incorrect error message (for example, -946 "Result set not permitted in '%1'")
This has been fixed. ================(Build #2071 - Engineering Case #777602)================ Like the majority of DDL statements, the CREATE EXISTING TABLE statement requires a catalog lock during execution. Any other server requests are suspended while the catalog is locked. However, the CREATE EXISTING TABLE statement was holding the catalog lock for longer than absolutely necessary.
This has been fixed and creating a proxy table now locks the catalog for a much shorter duration. ================(Build #2071 - Engineering Case #777370)================ The server could have crashed if there were unusual operations between a statement prepare and cursor open, as well as other connections performing DDL during the lifetime of the cursor. This has been fixed. ================(Build #2069 - Engineering Case #777385)================ When constructing certain round-earth geometries, the 32-bit server could have crashed. This was more likely for servers with the fix for Engineering case 765031. This has been fixed. There is no known workaround. ================(Build #2069 - Engineering Case #776816)================ Under some conditions, the server may have crashed when trying to access the property TcpIpAddresses from an event. This has been fixed. ================(Build #2069 - Engineering Case #775392)================ In rare circumstances, when queries were run with snapshot isolation over tables that had had new columns added, the server could have returned incorrect results, or possibly crashed. This has been fixed. A workaround is to ensure that all rows touched by an ALTER have been fully updated inline. This can be done by issuing a pair of statements like the following:
alter table T add delete_me integer default autoincrement;
alter table T delete delete_me;
Snapshot queries over table T will now work correctly after these statements are executed. ================(Build #2068 - Engineering Case #776911)================ The server may have crashed if an UPDATE on a table that had a publication used a cached plan. This has been fixed. ================(Build #2067 - Engineering Case #777200)================ Under rare circumstances, the server could have crashed when executing a positioned update. This has been fixed.
================(Build #2067 - Engineering Case #777187)================ Creating a proxy table to a HANA table with a large number of rows in the remote table would have taken a very long time. During this time, the SQL Anywhere server would have been locked and would not have responded to requests until the proxy table creation completed. This has been fixed, and proxy table creation is now much faster. ================(Build #2067 - Engineering Case #777185)================ In extremely rare circumstances, on Unix the database server or a multithreaded client application could have hung on shutdown if there were multiple TCP/IP connections being accessed simultaneously. This has been fixed. ================(Build #2065 - Engineering Case #752756)================ In very rare cases, the server may have crashed on currently unsupported arguments to the row constructor function. This has been fixed. ================(Build #2057 - Engineering Case #776241)================ Under rare circumstances, query plans involving parallel hash joins running in memory-constrained environments may have crashed the server. These query plans may also have been used by internal operators, such as those that do table validation and foreign key building, causing these statements to fail as well. The crash could occur only on servers that contained the fix for Engineering case 769501, as the issues fixed by that change would have masked the issue. This has been fixed. Note, the problem can be avoided by disabling parallel query execution (i.e. set option PUBLIC.max_query_tasks=1), and can be made less likely by increasing the amount of memory available to the server. ================(Build #2056 - Engineering Case #776164)================ Calling ST_Buffer on geometries with certain geometric properties could have caused the server to crash due to a stack overflow. In other rare cases ST_Buffer could have failed with ring-not-closed errors, or generated resulting geometries that were (usually only slightly) incorrect. This has been fixed.
================(Build #2055 - Engineering Case #776157)================ Invalid use of the ARRAY clause in an embedded SQL EXECUTE statement could have crashed the server. This has been fixed. ================(Build #2054 - Engineering Case #775938)================ If a spatial object was fetched in JSON format (using FOR JSON AUTO/RAW or ST_AsGeoJSON()), numbers between -1.0 and 1.0 would not have had the leading 0 that JSON requires. This has been fixed. ================(Build #2054 - Engineering Case #775653)================ In rare cases, the server could have crashed if a procedure returned a ROW or ARRAY typed column in its result set, but did not have a RESULT clause. This has been fixed. Note that if a procedure returns a ROW or ARRAY typed column in its result set, the procedure needs to have a RESULT clause. Changing the procedure to use Watcom SQL syntax may be required. Calling a procedure that returns a ROW or ARRAY typed column in its result set without a RESULT clause may now result in the error, "Procedure '<name>' needs a RESULT clause for returned ROW or ARRAY." ================(Build #2054 - Engineering Case #775640)================ When executing an ALTER TRIGGER … SET HIDDEN on an INSTEAD OF trigger for a view, an "invalid trigger type for view" error was returned. This has been fixed. Note that this issue also affected creating INSTEAD OF triggers with encrypted definitions on views. ================(Build #2053 - Engineering Case #775809)================ In rare, timing dependent cases, the server could have crashed when making a native external function call if a thread deadlock error occurred. This has been fixed. ================(Build #2053 - Engineering Case #775403)================ Under certain circumstances, calling the xp_scanf system procedure could have caused the server to crash. Also, format specifiers other than %s could have given unpredictable results. These have both been fixed.
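The RESULT clause requirement from Engineering case 775653 above might look like the following sketch. The procedure, its columns, and the use of an ARRAY type in a RESULT clause are illustrative assumptions based on the quoted error message, not code from the original case.

```sql
-- A Watcom-SQL procedure whose result set includes an ARRAY column.
-- Per the note above, such a procedure needs a RESULT clause; without one,
-- calling it may now report:
--   Procedure 'list_scores' needs a RESULT clause for returned ROW or ARRAY.
CREATE PROCEDURE list_scores()
RESULT( student_id INT, scores ARRAY(10) OF INT )
BEGIN
    SELECT 1, ARRAY( 90, 85, 77 );
END;
```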
================(Build #2052 - Engineering Case #774060)================ The server may have crashed or failed assertion 106104 - "Field unexpected during compilation" when using IN list predicates that do not contain only literal constants. This has been fixed. ================(Build #2051 - Engineering Case #775638)================ Hash operations used in query processing (join hash, group by hash, and distinct hash) choose a number of buckets based on the optimizer's estimated number of distinct keys. If the optimizer had an estimate that was far too low, the number of buckets selected could have been too small, leading to slower performance. For example, the following query executed in 25.6 seconds with the incorrect estimate of 0.52% on L.row_num >= 0 (the actual selectivity is 100%):
select *
from sa_rowgenerator(1,1.1727e6) L
join sa_rowgenerator(1,34680) R on L.row_num = R.row_num
where (L.row_num >= 0, 0.52)
options(User_estimates='On')
This has been improved so that performance is better even in the presence of underestimates (before the change: 25.6 seconds; after the change: 5.3 seconds). The graphical plan for hash-based query operators now includes lines to indicate how many key values the optimizer estimates will be found and the number of hash buckets selected for execution:
Key Values 6098.04
Hash table buckets 1031
================(Build #2051 - Engineering Case #775550)================ When the procedure st_geometry_dump recursively expanded a geometry of dimension >= 1 defined in a round-earth spatial reference system, the rows returned for internal points were incorrect. For example, the query:
select geom from st_geometry_dump( new ST_LineString(new ST_Point(1,1,4326), new ST_Point(2,2,4326)))
incorrectly returned the rows:
LineString (1 1, 2 2)
Point (180 -.000000000000006)
Point (180 -33.690067525979835)
This has been fixed.
================(Build #2051 - Engineering Case #775493)================ Execution of an ALTER TABLE … ADD COLUMN statement, using defaults that can return NULL on non-empty tables, could have caused data corruption. Two examples of NULL returning functions are user defined functions and GLOBAL AUTOINCREMENT. The newly added column must allow NULL values for this issue to occur. This has been fixed. ================(Build #2051 - Engineering Case #773123)================ When computing a set operation (ST_Union, ST_Intersection, ST_Difference, ST_SymDifference), the value NULL was incorrectly returned if one of the inputs was an empty curve (e.g., 'LineString EMPTY'). This has been fixed. ================(Build #2050 - Engineering Case #767053)================ Applications that attempted to make server side calls in a CLR External Environment would only have worked with .NET 2.0 or 3.5. Applications could use the CLR External Environment with assemblies targeted at .NET 4.0 or 4.5, provided those assemblies did not make server side calls back to the SQL Anywhere server. This problem has now been fixed, and new CLR External Environment executables have now been included for use with .NET 4.0 or 4.5. It should be noted that only one CLR External Environment can be launched per database. Hence applications need to decide prior to starting the CLR External Environment which version of .NET should be used. By default, the server will launch the CLR External Environment that will allow server side calls using either the .NET 2.0 or 3.5 Provider. 
If an application needs to make server side calls using .NET 4.0, then the following ALTER EXTERNAL ENVIRONMENT statement must be executed prior to starting the CLR External Environment:
ALTER EXTERNAL ENVIRONMENT CLR LOCATION 'dbextclr[VER_MAJOR]_v4.0'
Similarly, if an application needs to make server side calls using .NET 4.5, then the following ALTER EXTERNAL ENVIRONMENT statement must be executed:
ALTER EXTERNAL ENVIRONMENT CLR LOCATION 'dbextclr[VER_MAJOR]_v4.5'
If the application needs to go back to .NET 2.0 or 3.5, then the following ALTER EXTERNAL ENVIRONMENT statement must be executed:
ALTER EXTERNAL ENVIRONMENT CLR LOCATION 'dbextclr[VER_MAJOR]'
Note that in each of the above, [VER_MAJOR] should NOT be replaced with one of 12 or 16. It is best to keep [VER_MAJOR] as is in order to ensure a smooth transition to a newer version of SQL Anywhere if needed. ================(Build #2049 - Engineering Case #775313)================ In rare cases, the server could have crashed while applying the transaction log; this occurred only on HP-UX systems. This has been fixed. ================(Build #2049 - Engineering Case #774773)================ In very rare, timing dependent cases, if a procedure was being accessed (normally due to being called) at the same time it was being dropped or altered, the caller could have executed the previous definition of the procedure. It was also possible in extremely rare cases to drop a procedure that was already dropped, which could have caused the server to fail an assertion when applying this second, incorrect drop operation. This has been fixed so that a caller cannot access the old definition of a procedure, and so that a procedure that is already dropped cannot be dropped again. ================(Build #2048 - Engineering Case #775247)================ The value of the CurrentLineNumber connection property could have been reported incorrectly for a procedural statement (for example, SET or MESSAGE). This has been fixed.
================(Build #2048 - Engineering Case #775148)================ Under certain circumstances, connection_property( 'UtilCmdsPermitted', n ) could have crashed the server. This has been fixed. ================(Build #2048 - Engineering Case #774798)================ In rare, timing dependent cases, the server could have crashed when getting one of the CharSet, NCharCharset, ClientLibrary or Language connection properties for another connection (using connection_property( prop_name, conn_number )). This has been fixed. ================(Build #2048 - Engineering Case #773629)================ When the path for the server command line option -a ("apply named transaction log file") was wrong (the path is relative to the database path, not the server path), the server still appeared to go through recovery even though it did not. This has been fixed; the server now returns an error in this situation. ================(Build #2048 - Engineering Case #764386)================ If an application executed a query against a Microsoft SQL Server proxy table that contained SELECT FIRST or a subquery in an IF EXISTS( … ), then there was a chance the Remote Data Access layer would incorrectly send the SELECT FIRST to the remote server. Note that a similar problem existed with remote Oracle servers as well. These problems have now been fixed and the Remote Data Access layer will now send a TOP 1 instead. ================(Build #2046 - Engineering Case #774462)================ Under rare circumstances, the server could have crashed when describing a result set returned from a stored procedure. This has been fixed. ================(Build #2046 - Engineering Case #774305)================ Execution of the statement DROP <object-type> IF EXISTS <owner>.<object-name> would have returned the error "User ID '%1' does not exist" if <object-type> was a FUNCTION, PROCEDURE, or PUBLICATION, and there was no user named <owner>. This has been fixed.
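A sketch of the DROP … IF EXISTS behavior fixed in Engineering case 774305 above; the owner and object names are hypothetical.

```sql
-- Before the fix, each of these returned "User ID 'no_such_user' does not
-- exist" when no user named no_such_user existed; they now complete without
-- error when neither the owner nor the object exists.
DROP PROCEDURE IF EXISTS no_such_user.maintenance_proc;
DROP FUNCTION IF EXISTS no_such_user.days_between;
```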
================(Build #2045 - Engineering Case #774328)================ In rare cases, dropping a temporary procedure could have caused the server to crash. This has been fixed. ================(Build #2044 - Engineering Case #774333)================ Certain values returned by the SNMP agent may have been incorrect. This would only have happened if the values were greater than approximately 2 billion. This has been fixed. ================(Build #2042 - Engineering Case #774178)================ The execution of the system procedure sp_auth_sys_role_info, and of the system view SYSUSERPERMS which uses it, involved some server wide synchronization that may have delayed other requests. This has been fixed. ================(Build #2042 - Engineering Case #771780)================ If an application made an external environment call and then subsequently used STOP EXTERNAL ENVIRONMENT or STOP JAVA to shut down the external environment, there was a very small chance the server could have hung or crashed if the external environment crashed at the same time as the stop request. A similarly rare problem could have occurred if the connection was dropped at the same time the external environment crashed. This problem has now been fixed. ================(Build #2038 - Engineering Case #773703)================ Under exceptionally rare circumstances, the server may have leaked an internal connection with the connection name "INT:FlushStats", and therefore could not complete the database and server shutdown when requested. This has now been fixed. ================(Build #2038 - Engineering Case #773683)================ The server may have crashed if a DELETE statement that deleted rows from a local table also referenced proxy tables. This has been fixed.
================(Build #2038 - Engineering Case #773538)================ The graphical plan did not show the value for "Final plan build time" in the "Advanced Details" of the top node under "Global Optimizer Statistics". This has been fixed. ================(Build #2036 - Engineering Case #773420)================ If more than 254 databases were specified on the command line when starting the database server, the server would have quit without giving an error. This has been fixed. A "Too many databases specified: <dbname>" error will now be generated. ================(Build #2036 - Engineering Case #773198)================ Under rare circumstances, the server may have crashed while executing a query with OPENXML functions. This has now been fixed. ================(Build #2030 - Engineering Case #771731)================ If an ALTER failed between START and STOP SYNCHRONIZATION SCHEMA CHANGES there was a possibility that the server would have failed assertion 107101 ("Table lock inconsistency"). This has been fixed. ================(Build #2029 - Engineering Case #772744)================ Applications calling the dbtools function DBLogFileInfo() could have crashed. This has been fixed. ================(Build #2027 - Engineering Case #772523)================ Under exceptionally rare circumstances, the server may have crashed when executing complex statements with proxy tables. This has been fixed. ================(Build #2025 - Engineering Case #771281)================ The server may have appeared to hang while creating a histogram on a NUMERIC column that had an index. This has been fixed. ================(Build #2022 - Engineering Case #771878)================ If a client-side backup was terminated, or failed due to a communication error or other unexpected error, database and transaction log file growth could have had poor performance. This has been fixed. Also, if the server was shut down while a backup was in progress, the backup could have reported a protocol error.
This has been fixed so that a "Database server not found" error is now returned. ================(Build #2021 - Engineering Case #771105)================ The global variable @@error could have been set incorrectly for the first error encountered in a stored procedure or batch. This has been fixed. ================(Build #2020 - Engineering Case #771690)================ In very rare cases, the server may have crashed while executing the system procedure xp_cmdshell() if the client connection calling the procedure was cancelled. This has been fixed. ================(Build #2020 - Engineering Case #771622)================ The COUNT_BIG() aggregate function is intended to be used in cases where the number of rows is larger than can be represented in an INTEGER. When initially implemented for SQL-level compatibility, COUNT_BIG() was an alias for COUNT(), returning an INTEGER and restricted to the same supported input cardinalities. The following corrections have been made:
- COUNT_BIG() now returns a BIGINT.
- When executing a parallel query plan with a COUNT_BIG() over a table larger than representable in an INTEGER (approximately 2 billion rows), an error such as "Value SUM() out of range for destination" was returned. This is now corrected.
- The SUM(x) aggregate function now returns BIGINT if x is of type BIGINT.
- When using COUNT_BIG() in sliding window queries, the window was re-scanned for each row of the window instead of decrementing the count when rows were removed. This has been corrected.
- The graphical plan for window operators now displays the PARTITION BY, ORDER BY, and window functions computed by the operator. It also shows whether rows are removed from the functions by inverting the aggregate (e.g., SUM, COUNT, COUNT_BIG) or by rescanning the window (e.g., MIN, MAX). Rescanning the buffer is slower and proportional to the square of the maximum buffer size.
- The COUNT_BIG aggregate is now supported for incrementally maintained materialized views.
- When printed with no argument, COUNT_BIG is now printed as COUNT_BIG(*).
- If COUNT_BIG() was used in a subquery that was flattened by semantic transformations, it was possible for the function to improperly return NULL instead of 0 (the classic COUNT bug). This has now been corrected.
================(Build #2020 - Engineering Case #771618)================ The server may have crashed when using ARRAY or ROW type values in statements that needed to convert them to strings. This has been fixed. Note, the problem does not happen in DDL and DML statements. ================(Build #2020 - Engineering Case #771542)================ Executing CREATE SUBSCRIPTION or DROP SUBSCRIPTION statements could have failed if they used a subscription-value. This has been fixed. ================(Build #2020 - Engineering Case #771431)================ In specific circumstances, if there were concurrent connections executing the same procedure it was possible for the server to crash. The conditions were rare and timing dependent, and have now been fixed. ================(Build #2018 - Engineering Case #771444)================ JSON services would have returned an empty document if an SQL error occurred. This has been fixed so that an object containing one row with two keys, status: "error", and message: "<text of the SQL error message>", is now returned. ================(Build #2018 - Engineering Case #771441)================ Creating a JSON service that called a procedure which set the CharsetConversion option would have caused the server to crash. This has been fixed. ================(Build #2018 - Engineering Case #770430)================ A server thread could have gone into an infinite loop attempting to update an index, eventually resulting in a server hang. The index in question was not corrupt; the server was misinterpreting an index key. This has been fixed so that the key is now interpreted correctly.
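The corrected COUNT_BIG() typing from Engineering case 771622 above can be observed with the EXPRTYPE() function; the SYS.SYSTAB system view is used here only to keep the sketch self-contained, and any table would do.

```sql
-- COUNT_BIG(*) is now described as BIGINT rather than INTEGER.
SELECT EXPRTYPE( 'SELECT COUNT_BIG(*) FROM SYS.SYSTAB', 1 );

-- The aggregate itself is used exactly like COUNT(*):
SELECT COUNT_BIG( * ) FROM SYS.SYSTAB;
```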
================(Build #2017 - Engineering Case #771288)================ When specifying a certificate for use in a secure web service call, the call could have failed (with a TLS handshake error) if multiple certificates were specified using "certificate=". For example, this could have happened if using "certificate=xp_read_file( certificate_file )" and the file contained more than one certificate. This has been fixed. ================(Build #2017 - Engineering Case #771044)================ The server may have incorrectly returned the error 'Function or column reference to '<user-name>' in the ORDER BY clause is invalid', if a SELECT statement used a user-defined function with an owner name in the ORDER BY clause; e.g. ORDER BY <user-name>.<function-name>(...). This has been fixed. ================(Build #2017 - Engineering Case #770510)================ Under rare circumstances, the server could have crashed executing query plans involving group-by operators above parallel scans when operating in memory-constrained environments. This has been fixed. The problem can be avoided by disabling parallel query execution (set option PUBLIC.max_query_tasks=1), or made less likely by increasing the amount of memory available to the server. ================(Build #2017 - Engineering Case #769501)================ Query plans involving parallel hash joins running in memory-constrained environments may have failed to return all of the rows from the join. These query plans may also be used by internal operators, such as those that do table validation and foreign key building, causing these statements to also fail. This has now been fixed. The problem can be avoided by disabling parallel query execution (set option PUBLIC.max_query_tasks=1), and can be made less likely by increasing the amount of memory available to the server.
================(Build #2017 - Engineering Case #769347)================ The UPDATE and DELETE statements do not support ordinal column numbers in the ORDER BY clause. DELETE statements that bypass the optimizer did not return an error if ordinal column numbers were used in the ORDER BY. This has been fixed. For UPDATE and DELETE statements the SQL reference correctly documents: "You cannot use ordinal column numbers in the ORDER BY clause." But for DELETE statements the syntax must be changed from [ ORDER BY { expression | integer } [ ASC | DESC ], ... ] to [ ORDER BY expression [ ASC | DESC ] , ...] ================(Build #2015 - Engineering Case #770956)================ A query with WITH RECURSIVE and FOR XML/JSON clauses could have returned an error when used in a stored procedure definition. This has been fixed. Note, affected procedures will need to be recreated with the fixed version of the server. ================(Build #2015 - Engineering Case #770496)================ Under exceptionally rare circumstances, the Unload utility (dbunload) would have returned the error "Primary key for table 'sa_unload_stage2' is not unique". For this to have occurred, the unloaded database must have contained foreign keys with very high ids, and the constraint name of the foreign key must have been renamed. This has been fixed. ================(Build #2014 - Engineering Case #768034)================ In rare timing dependent cases, a synchronized mirror server could have failed to take over as primary when the old primary went down. In order for this problem to have occurred the mirror database must have restarted while the primary server was still running. This has been corrected. ================(Build #2013 - Engineering Case #770597)================ The server may have crashed while updating a histogram. This has been fixed. 
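The DELETE … ORDER BY restriction from Engineering case 769347 above, sketched with hypothetical table and column names:

```sql
-- Ordinal column numbers are not supported in a DELETE ... ORDER BY, and a
-- statement such as the following is now reported as an error even when the
-- plan bypasses the optimizer:
--   DELETE TOP 10 FROM event_log ORDER BY 1;
-- An expression must be used instead:
DELETE TOP 10 FROM event_log ORDER BY event_time DESC;
```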
================(Build #2013 - Engineering Case #770500)================ Under exceptionally rare circumstances, the server may have crashed when executing the function CONNECTION_PROPERTY() for a connection other than the one making the request, if the queried property was a connection option but not a connection statistic. This has been fixed. ================(Build #2013 - Engineering Case #768791)================ The server could have hung attempting to schedule a recurring event. This has been fixed. ================(Build #2010 - Engineering Case #727093)================ In extremely rare, timing dependent cases a database could have been corrupted if a server crashed or was terminated during a checkpoint, and then was terminated again during recovery. This has been fixed. ================(Build #2009 - Engineering Case #748349)================ In extremely rare timing dependent cases, it was possible for the transaction log on a copy node to have become corrupted. In order for the potential corruption to have occurred, the copy node needed to have a parent other than the primary/root server, and the parent must have been writing a page to the transaction log at the same time the child copy node was requesting the last page of the parent's transaction log. Note that once the copy node has caught up to the parent's log, the parent sends pages to the copy node during commit operations. This problem could only have occurred when the copy node was requesting pages from the parent, and not when the parent was sending pages to the copy node. This has been fixed. ================(Build #2007 - Engineering Case #767963)================ Calling the system procedure xp_cmdshell() could have caused the server to shut down on UNIX in rare, timing-dependent events. This has been fixed. There is no known workaround.
================(Build #2006 - Engineering Case #768186)================ The server may have returned poor selectivity estimates for equi-sarg searches on a floating-point type column. Some changes have been made to improve the accuracy of these estimations. ================(Build #2004 - Engineering Case #769515)================ If the system procedure sa_validate() was called with a table or materialized view name only, then only one of the possibly several owner.name objects was validated. For example, given two tables named "Products" as follows:
CREATE TABLE FarmEquipment.Products(...);
CREATE TABLE HighwayEquipment.Products(...);
the following statement would only have validated one of them at random:
SELECT * FROM sa_validate( 'Products' );
Even if the user executing the SELECT was the owner of a table called "Products", it may not have been the table that was validated. In other words, the statement above was not equivalent to:
VALIDATE TABLE Products;
The documentation states that all tables/materialized views matching the specified object name are validated. This problem has been fixed. The work-around is to specify the table/materialized view owner (the second argument). ================(Build #2003 - Engineering Case #769689)================ Queries with predicates of the form "not exists(subquery)" could have had a sub-optimal execution plan. This has been fixed. For this to happen, the following conditions had to hold: (1) the subquery was correlated with the main query block; (2) the subquery was very small compared to the table(s) referenced by the correlations. ================(Build #2003 - Engineering Case #768882)================ Under rare circumstances, executing a procedure defined with SQL SECURITY INVOKER could have caused the server to crash. This has been fixed.
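The sa_validate() workaround from Engineering case 769515 above, using the owners from the documented example:

```sql
-- Passing the owner as the second argument disambiguates same-named tables,
-- so each Products table can be validated explicitly:
SELECT * FROM sa_validate( 'Products', 'FarmEquipment' );
SELECT * FROM sa_validate( 'Products', 'HighwayEquipment' );
```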
================(Build #2003 - Engineering Case #768311)================ Under exceptionally rare circumstances, the server may have crashed when running the system function CONNECTION_PROPERTY() for a connection other than the one making the request, if the queried property was a connection option but not a connection statistic. This has been fixed. ================(Build #2000 - Engineering Case #769159)================ When attempting to connect to a secure SMTP server using the system procedure xp_startsmtp(), the connection would not have timed out. This has been fixed. ================(Build #2000 - Engineering Case #769059)================ Wide INSERT statements (i.e., prepared INSERT statements which insert more than one row at a time) require that each host variable/parameter appear exactly once within the VALUES clause and not be nested within an expression. In some cases, a repeated host variable name would not have been detected and the resulting inserted row could have contained an incorrect value in one of the columns. This has now been fixed. Incorrect wide inserts will now return SQL code -155 (Invalid host variable). ================(Build #1994 - Engineering Case #770143)================ The version of OpenSSL now used by the server (as well as all SQL Anywhere products) is 1.0.1i. ================(Build #1992 - Engineering Case #768684)================ The server may have returned the error 'Invalid expression' or crashed, if common table expressions were used in statements with proxy tables. This has now been fixed. ================(Build #1990 - Engineering Case #768466)================ In rare cases, the server would have appeared to not have processed requests for a duration of 30 seconds. For this to have occurred, the auto multiprogramming level adjustment had to have been active and there had to have been many client side connections that were blocked. This has now been fixed.
================(Build #1989 - Engineering Case #768346)================ The server would have incorrectly returned the error "Invalid recursion" for a query that contained proxy tables and the UNNEST construct. This has been fixed. ================(Build #1989 - Engineering Case #767595)================ For same-machine HTTP connections, connection_property('ClientNodeAddress') would have returned an IP address (usually "127.0.0.1" or "::1"). For same-machine connections, this property should return an empty string. This has been fixed. ================(Build #1989 - Engineering Case #764064)================ If a CLR stored procedure attempted to create an SAConnection using the connection string from the SAServerSideConnection object, the server and CLR External Environment would both have crashed. For example, if calling a CLR stored procedure resulted in code similar to the following being executed on the CLR External Environment side: SAConnection local_conn = new SAConnection( SAServerSideConnection.Connection.ConnectionString ); local_conn.Open(); then both the SQL Anywhere Server and the corresponding CLR External Environment would crash. This problem has now been fixed. ================(Build #1989 - Engineering Case #751208)================ An attempt to run the Extraction utility (dbxtract) on a database could have failed with the error: "cannot perform specified operation, number of administrators for role 'SYS_AUTH_WRITEFILE_ROLE' falls below min_role_admins option value", if the SYS_AUTH_DBA_ROLE had been migrated. This problem has now been fixed. ================(Build #1987 - Engineering Case #767793)================ In rare, timing dependent cases, when the primary went down and the mirror failed over to become the new primary, it was possible for the transaction log on the new primary to contain operations that were never applied to the database.
This could have resulted in the database file on the primary being different from the database file on the mirror and any copy nodes. The mirror or copy nodes could have failed with errors or assertions related to applying the transaction log. This issue was possible, but extremely unlikely and never observed, if the synchronization_mode was synchronous. It had been observed if the synchronization mode was asynchronous or asyncfullpage. This has been fixed. ================(Build #1986 - Engineering Case #767780)================ The server could have taken a long time to shut down if the built-in HTTP server was used during previous executions of any server on the same computer. Other side effects of this problem may also have been seen: - If the sadiags.xml file was large, then the merge-to-disk operation that automatically happens at midnight each day may have taken a long time, and any other operation that caused a feature to be 'counted' would have blocked until the merge was complete. That is, some operations at midnight may have been seen to 'hang' or take longer than usual to complete. - MobiLink servers could also have suffered the slow shutdown (or midnight sync) problem if a database server was running on the same computer and that database server had generated a large sadiags.xml file. This has been fixed. A work-around is to delete the sadiags.xml file prior to shutting down the server. This should only be expected to improve the shutdown time if the file is large (e.g., over 100K bytes). ================(Build #1983 - Engineering Case #767873)================ The server could have crashed when performing a recursive union query. This would only have occurred when running the query against a server with a very small buffer pool or many active memory-intensive queries. This has now been fixed.
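The sadiags.xml work-around above can be scripted as part of a shutdown procedure. A hedged sketch in Python; the file name and the 100K threshold come from the note above, while the exact location of sadiags.xml depends on the installation:

```python
import os

def cleanup_sadiags(path, threshold=100 * 1024):
    """Remove the diagnostics file if it exceeds `threshold` bytes.

    Per the fix note, deleting sadiags.xml before shutdown only helps
    when the file is large (e.g., over 100K bytes). Returns True if
    the file was removed.
    """
    try:
        if os.path.getsize(path) > threshold:
            os.remove(path)
            return True
    except OSError:
        pass  # file missing or inaccessible: nothing to clean up
    return False
```

Call cleanup_sadiags() with the path to sadiags.xml on the server machine before stopping the server.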
================(Build #1983 - Engineering Case #767808)================ If the Information utility (dbinfo), the Validation utility (dbvalid), or the VALIDATE DATABASE statement was run against a database, and a backup of that database was done shortly thereafter, there may have been problems recovering the backup. This has been fixed. ================(Build #1983 - Engineering Case #767799)================ When attempting to create a database with the reserved name 'utility_db' (a very rare use case), the server would have leaked memory. This has been fixed. ================(Build #1982 - Engineering Case #767365)================ Execution of an ALTER TABLE … ADD with multiple add clauses, each containing a default value, may have caused server assertion failures. This has been fixed. ================(Build #1980 - Engineering Case #767805)================ The server could have crashed when performing a query containing a CUBE or ROLLUP, or that specified grouping sets. This was only possible when the server was operating in a memory-constrained environment. This has been fixed. ================(Build #1980 - Engineering Case #721300)================ Copy nodes which used a partner server name as a parent (as opposed to the primary or mirror) could have failed to connect to this parent. In this case, the initial connection to the parent could have succeeded, but if the parent database or server was restarted, the copy node was unable to connect. This has been fixed. ================(Build #1980 - Engineering Case #681743)================ In rare, timing dependent cases, a server which was a copy node and its parent could both have hung for several minutes, after which the parent reported the following message in the console: "Mirroring request timed out: dropping mirroring connection". If this did occur, after the message was logged, the copy node would have reconnected and both servers continued normally.
In order for this situation to occur, there needed to be requests from the copy node server to the parent server (such as mirroring requests for another database or remote data access requests). This has been fixed so that servers that have fewer than about 20 committed transactions per second will no longer hang. There may also be slightly improved performance when using copy nodes. ================(Build #1976 - Engineering Case #754970)================ In rare, timing dependent cases, the server could have hung indefinitely if it was renaming the transaction log at the same time as a DDL operation. This has been fixed. ================(Build #1973 - Engineering Case #767121)================ If an application attempted to execute an ALTER EXTERNAL ENVIRONMENT statement on a case-sensitive database, then the server would have returned an "external environment not found" error if the environment name was specified in mixed or upper case. This problem has now been fixed. ================(Build #1973 - Engineering Case #767050)================ Under rare circumstances, the server could have failed a fatal assertion with a "Dynamic memory exhausted" error. This has been fixed. ================(Build #1973 - Engineering Case #681616)================ Performance of the 'asyncfullpage' mirror synchronization_mode has been improved to be significantly better than the 'async' synchronization_mode for workloads that can take advantage of it. Note that the 'synchronous' synchronization_mode is recommended, since both the 'async' and 'asyncfullpage' modes can result in lost transactions if the primary server fails. After changing the synchronization_mode while the mirror server was connected, the mirror could have failed to take over as primary (including if the synchronization_mode was changed from 'async' or 'asyncfullpage' to 'synchronous'). This has been fixed.
A side effect of this fix is that the database on the mirror server automatically stops and restarts if the synchronization_mode is changed between an asynchronous mode and 'synchronous'. If the mirror database stopped and immediately restarted, in rare timing dependent cases, it was possible that the mirror would fail to take over as the primary if the primary failed. This has been fixed. ================(Build #1972 - Engineering Case #767046)================ If the Data Source utility (dbdsn) was used to create, modify, or delete a DSN, and the Driver connection parameter was used, the driver name needed to match the installed driver name exactly. This has been fixed so that the name is now case-insensitive. ================(Build #1969 - Engineering Case #751211)================ If there were many inserts, updates or deletes on a primary server, then checkpoints on the mirror or copy node could have been slow. In some cases, the primary's performance could be significantly impacted by slow checkpoints on the mirror. This has been fixed so that mirror and copy node checkpoint performance is improved. ================(Build #1959 - Engineering Case #766303)================ The server usage text displays the switches in alphabetical order, but "-sbx" was not placed properly. This has been fixed. ================(Build #1959 - Engineering Case #764810)================ On Linux and Mac OS X platforms, the server was not accepting denormal double values in INSERT statements. This has been fixed. ================(Build #1958 - Engineering Case #666123)================ The server did not respond to a cancel during execution of a REGEXP or LIKE predicate if the first operand was a very long constant expression. This has been fixed.
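Denormal (subnormal) doubles, as in case #764810 above, are valid IEEE 754 values that are nonzero yet smaller than the smallest normal double; a quick illustration in Python:

```python
import sys

# Smallest positive subnormal double, about 5e-324.
denormal = 5e-324

print(denormal > 0.0)                 # nonzero...
print(denormal < sys.float_info.min)  # ...yet below the smallest *normal* double
```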
================(Build #1955 - Engineering Case #737027)================ If, in a query block, two identical correlation names referred to exactly the same view or base table, then the server could have merged them and performed a query rewrite. The server incorrectly rewrote a query if one of the two table expressions aliased with the same correlation name was a derived table, a procedure name, an openstring expression, a contains expression, or a DML derived table. As a result, the server may have returned an incorrect result set. This has been fixed. ================(Build #1954 - Engineering Case #765993)================ The server allows one extra connection beyond the connection limit as long as that connection has the DROP CONNECTION privilege (or DBA authority prior to version 16). This is to allow that connection to drop other connections if all connections to a server become blocked. However, once this extra connection was made, the server would not have allowed new connections until (a) that extra connection went away, or (b) two other database connections went away. This has been fixed: (b) above has been changed to "one other database connection goes away". ================(Build #1953 - Engineering Case #765979)================ Calls to connection_property( 'TempFilePages' ) could have incorrectly returned values greater than 2 billion when 0 should have been returned. This has been fixed. ================(Build #1952 - Engineering Case #758580)================ When a reusable plan for a statement in a stored procedure was executed from the plan cache, in certain cases the reused plan failed to obtain schema locks on all of the tables occurring in the statement. As a result, another connection was not blocked from performing concurrent DDL on a table in use by the cached plan. This scenario could have caused data corruption that could lead to a non-recoverable database. This has been fixed.
================(Build #1951 - Engineering Case #764973)================ In rare, timing dependent cases, the server could have failed assertion 101201 - "Deferred growth not suspended for checkpoint" after a cancelled backup which did a log rename or truncate. This has been fixed. ================(Build #1946 - Engineering Case #765443)================ Reading beyond the end of a file (e.g., for a LOAD TABLE statement from a 0-length file) on Linux could have caused the server to crash when the file was on a remote NFS drive. This only happened if O_DIRECT was enabled over NFS in Linux kernel versions 3.5, 3.6, and 3.7. Details of the kernel bug can be found here: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/fs/nfs/direct.c?id=67fad106a219e083c91c79695bd1807dde1bf7b9 This has been fixed. Direct I/O will no longer be used for remote file systems on these Linux versions. Prior to this fix, upgrading the Linux kernel to at least 3.8 was the only workaround. ================(Build #1945 - Engineering Case #765566)================ Certain syntactically invalid queries using SELECT FROM DML constructs could have caused the server to crash. This has been fixed. ================(Build #1945 - Engineering Case #765031)================ A polygon in a Round Earth SRS could have failed to be represented correctly if it contained an edge on the equator going east to west and crossing latitude 0. This has been fixed. ================(Build #1942 - Engineering Case #765363)================ In timing dependent cases, the server could have crashed, or otherwise failed, if a backup was cancelled. This has been fixed. ================(Build #1941 - Engineering Case #761197)================ The database server may have crashed if any one dbspace file reached the maximum size of 0x10000000 pages. If a single dbspace needs more space and cannot be split into multiple dbspaces, a larger page size is required.
For example, the dbspace size limit for a 2K page size is 512GB, while a 4K page size allows dbspaces up to 1TB in size. When a single file gets too big, the server will now detect this situation: version 11 and 12 servers will display a failed assertion message and shut down, while version 16 and later servers will display a failed assertion message and shut down only the affected database. ================(Build #1938 - Engineering Case #764849)================ If a very long string was unloaded from a database and was loaded into a second database with a different character set, the LOAD TABLE statement may have returned the SQL error -1313, "Maximum string length exceeded". This was only likely to happen if the length of the string was within a page size of the string limit of 2GB-1. This has been fixed. ================(Build #1933 - Engineering Case #765821)================ The server could have hung while executing a stored procedure that invoked other stored procedures. This has been fixed. ================(Build #1926 - Engineering Case #764620)================ The system procedure sa_certificate_info() was returning binary data for the Serial Number field of a certificate. This has been fixed so that it now returns a hexadecimal representation of the serial number. ================(Build #1925 - Engineering Case #764411)================ If the first line of the HTTP response from an outbound client HTTP request was split across two or more packets, then the server would have failed to accept the response from the remote server. This has been fixed. ================(Build #1924 - Engineering Case #764425)================ In some cases the server cache would not shrink. This would have happened after a drop in server activity that followed rapid cache growth. The rapid cache growth had to have happened because of an allocation of pages for heap or temporary table pages. Although the cache was not able to shrink, it could still be reused. This has now been fixed.
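The sa_certificate_info() change above amounts to hex-encoding the serial number's raw bytes instead of returning them verbatim. A sketch of the conversion in Python, using a made-up serial value:

```python
# Hypothetical raw serial-number bytes, as a certificate parser might yield them.
serial_bytes = bytes([0x01, 0x9F, 0x3C, 0x00, 0xAB])

# Hexadecimal text representation of the same value.
serial_hex = serial_bytes.hex().upper()
print(serial_hex)  # 019F3C00AB
```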
================(Build #1918 - Engineering Case #764055)================ Operations that compute expressions with string or NUMERIC types now have slightly improved performance. ================(Build #1917 - Engineering Case #764062)================ On servers where the number of connections executing at a lower priority level exceeded the number of connections executing at a higher priority level, more CPU cycles would have been given to the lower priority tasks. This has been fixed. ================(Build #1915 - Engineering Case #763851)================ Fetching the ServerNodeAddress connection property may have returned incorrect values for HTTP connections. For example, it may have returned an empty string for remote connections, or it may have returned a valid IP address for a local connection. This has been fixed. ================(Build #1915 - Engineering Case #666731)================ Under rare circumstances, the server could have hung when values for WHERE or SUBSCRIBE BY clauses were being changed in a table with an article. This has been fixed. This fix covers additional cases that were not covered by Engineering case 654952. ================(Build #1914 - Engineering Case #763803)================ Loading the following polygon: POLYGON((-1 -1, -1 1, 0 0, 1 1, 1 -1, -1 -1)) would have resulted in an invalid polygon error. This has been fixed. There is no known workaround. ================(Build #1911 - Engineering Case #763621)================ If a web service of type 'JSON' returned a floating-point number between -1 and 1, it would not have included a leading zero (e.g., it would have returned ".5" rather than "0.5"). The JSON standard requires the leading zero. This has been corrected. ================(Build #1910 - Engineering Case #763545)================ SET MIRROR OPTION auto_failover = 'On' was incorrectly being treated as 'Off'. As a workaround, SET MIRROR OPTION auto_failover = 'Yes' can be used.
This has been fixed so that both 'On' and 'Yes' are now supported. ================(Build #1910 - Engineering Case #762317)================ If a computer supported IPv6, but had IPv4 disabled, the database server would have failed to start if TCP/IP or HTTP were being used. In addition, a SQL Anywhere client application running on such a machine could have failed to find servers using TCP/IP. These problems have now been fixed. ================(Build #1905 - Engineering Case #763241)================ In rare cases, the database server may have crashed while request level logging was turned on. This has been fixed. ================(Build #1904 - Engineering Case #761990)================ If the cardinality estimate of the left-hand side of a JOIN EXISTS was less than 1, a NestedLoopsSemijoin (JNLS) would always have been chosen based on cost. This could have led to poor performance in some cases. A more robust optimization has now been implemented. ================(Build #1903 - Engineering Case #763156)================ An unlikely security vulnerability in the server has been fixed. ================(Build #1901 - Engineering Case #763031)================ In very rare, timing dependent cases, starting a server with a mirrored database could have failed with the server failing assertion 101426. This has been fixed. ================(Build #1900 - Engineering Case #748083)================ The server may have crashed when using ARRAY or ROW type values in string operations. This has been fixed.
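The JSON grammar behind case #763621 above requires at least one digit before the decimal point, so ".5" is invalid while "0.5" is well-formed. Python's json module, used here only as a convenient conforming implementation, shows both sides:

```python
import json

# A conforming serializer always emits the leading zero.
print(json.dumps(0.5))  # 0.5

# A strict parser rejects the bare-fraction form.
try:
    json.loads(".5")
    rejected = False
except json.JSONDecodeError:
    rejected = True
print(rejected)  # True
```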
================(Build #1900 - Engineering Case #671957)================ The server may have returned an incorrect result set when a query had a grouped subquery in the null-supplying side of an outer join, and all of the following conditions were true: - all group by expressions in the subquery were constants - the subquery had a COUNT aggregate - the subquery did not affect any table rows - the outer join condition referenced a constant grouping column of the subquery For example, in the following query: select * from T2 left outer join ( select 111 c1, count(*) c2 from T1 where a1 < 44 ) V on T2.a2 = V.c1 if table T1 has no row with a1 < 44, then the query does not return rows in any circumstances. This has been fixed. ================(Build #1897 - Engineering Case #762878)================ If a CREATE OR REPLACE PROCEDURE/FUNCTION statement was executed on the primary server, and user connections needed to be dropped to apply this statement on a mirror/copy node, the procedure or function was not replaced on the mirror/copy node and no error was reported. This has been fixed. Note that CREATE PROCEDURE/FUNCTION and ALTER PROCEDURE/FUNCTION statements were not affected. ================(Build #1897 - Engineering Case #762616)================ Under very rare circumstances, executing a query with a cached parallel plan could have crashed the server. This has been fixed. ================(Build #1895 - Engineering Case #762695)================ If an attempt to cache a plan for a procedure statement caused execution of a poor quality reusable plan, this behavior would have repeated with a fixed period of executions. This has been fixed. ================(Build #1886 - Engineering Case #761748)================ If the transaction log was renamed while the MobiLink client (dbmlsync) was performing a sync operation, and no transactions were in progress, the SQL Anywhere server would have generated the new log in a way that would have caused dbmlsync to send downloaded rows for upload.
This problem has been fixed. ================(Build #1882 - Engineering Case #761652)================ In rare cases the server may have returned SQLCODE -150 or failed assertion 106104 when optimizing an aggregation query that qualified for optimizer bypass. This could have occurred when the aggregate function was within the left-hand side of a dot-notation function expression (primarily occurring within spatial queries), such as ST_Geometry::ST_EnvelopeAggr(column).ST_AsWKB(). This has been fixed. ================(Build #1882 - Engineering Case #761039)================ Under exceptionally rare circumstances, the server may have crashed when executing UPDATE or DELETE statements that bypass the optimizer, if the number of columns of the table was evenly divisible by 32, all the column values were needed in the statement, and an index had been chosen. This has been fixed. ================(Build #1881 - Engineering Case #761500)================ The system procedure sa_recompile_views may have returned an unexpected error if it was executed outside the reload.sql script with non-standard settings for the database options quoted_identifiers and ansi_close_cursors_on_rollback. This has been fixed. ================(Build #1878 - Engineering Case #761107)================ The server may have returned an incorrect result set if an OrderedGroupBy strategy used an index and the GroupBy query block contained an outer reference after the query rewrite. This has been fixed. ================(Build #1872 - Engineering Case #760057)================ With some LDAP servers, a zero-length password could unexpectedly have allowed a user to authenticate successfully. This has been corrected. ================(Build #1872 - Engineering Case #727737)================ Some clauses of the CREATE SPATIAL REFERENCE SYSTEM statement were incorrectly recorded in the transaction log.
This has been fixed. ================(Build #1868 - Engineering Case #759431)================ The search condition "<expression> IS [ NOT ] ( type-name, ... )" may have incorrectly evaluated to FALSE, or have caused a server crash. This has been fixed. ================(Build #1867 - Engineering Case #760874)================ If the system procedure sa_server_option() was used to change the 'IdleTimeout' or 'LivenessTimeout' option values to less than 0 or more than 32767, connections may have been incorrectly dropped. This has been corrected so that an error is now given if a value less than 0 or more than 32767 is specified for these options. In both cases, a value of 0 means there is no timeout. ================(Build #1865 - Engineering Case #760526)================ Under rare circumstances, the server could have crashed when checking whether a stored procedure could be in-lined. This has been fixed. ================(Build #1865 - Engineering Case #759209)================ The server may have loaded values that were too large, as well as NaN/INF values, into DOUBLE, FLOAT and REAL columns via LOAD TABLE and OPENSTRING. This has been fixed. The server will now generate an error for these values. ================(Build #1865 - Engineering Case #758543)================ The server could have become deadlocked when attempting to drop a user with a table in use. This has been fixed. ================(Build #1859 - Engineering Case #759525)================ If a web procedure URL was badly formed, the server could have crashed when the procedure was called. This has been fixed. ================(Build #1858 - Engineering Case #760144)================ When performing an absolute or relative fetch, the SELECT list expressions for rows that were not returned were evaluated. These evaluations were not strictly needed, and if the expressions contained expensive operations such as UDFs, this could have been slower than necessary.
If the Row_counts option was TRUE and the query optimizer could not accurately estimate the number of rows to be returned, the server simulates an ABSOLUTE fetch to position 1000000000 in order to count the rows. Before this change, all select list expressions were evaluated for these rows. These unneeded evaluations are now skipped. A change in behaviour is that side effects and errors are no longer observed in some cases. There may still be more executions than client fetches due to factors such as prefetching. ================(Build #1858 - Engineering Case #758422)================ When starting the database server with a minimum multiprogramming level (command line option "-gnl <value>") that was higher than the default maximum, the database server would not have adjusted the maximum setting. The number of threads would never have gone higher than the maximum. This has been fixed. If the minimum (-gnl) value exceeds the maximum value, then the maximum and initial MultiProgrammingLevel settings will be readjusted to this minimum. ================(Build #1857 - Engineering Case #760128)================ In certain cases, a user who had the appropriate system privilege to create a procedure, function, view, trigger, or sequence, but did not have the appropriate privilege to alter it, was still able to alter the object. This has been fixed. ================(Build #1857 - Engineering Case #760022)================ If auditing was on and the database contained events that were executed, the output from dbtran -g could have contained lines like the following: --CONNECT-1025-0000952657-failure-2014-03-18 09:36 indicating a possible connection failure. This has been fixed. These lines do not indicate a connection failure; they can be safely ignored and will not show up in transaction logs created with fixed servers. ================(Build #1853 - Engineering Case #759618)================ A failed ALTER TABLE statement could have corrupted the database.
For corruption to occur, the table being altered must have contained no data at the time, but must have contained some at some point. The error most likely involved was "column not found". Possible assertion failures included 201135 (page freed twice), 201503 (Record X not present on page Y), 200106 (attempting to add row twice), 200131 (Invalid page found in index), 106200 (Unable to undo index changes during rollback), 100700 (Unable to find table definition X for record referenced in rollback log), and 101422 (Attempt to write an invalid page). This has been fixed. ================(Build #1852 - Engineering Case #758332)================ For certain geometries, ST_Buffer may have reported a "ring not closed" error. This has been fixed. ================(Build #1850 - Engineering Case #757959)================ Under exceptionally rare circumstances, the server may have returned too many rows from a parallel hash semi-join if the build site was the preserved table. This has been fixed. ================(Build #1849 - Engineering Case #756777)================ In rare cases, the server may have crashed when executing an invalid plan that contained an equi-join to a correlated subquery. This has been fixed. ================(Build #1846 - Engineering Case #748710)================ Under some circumstances, the server could have crashed when executing a procedure with a NO RESULT SET clause. This has been fixed. ================(Build #1844 - Engineering Case #758811)================ In rare, timing dependent cases, if a connection was running a procedure and a DDL operation occurred, the server could have crashed. In order for the crash to have occurred, the procedure needed to have a statement that referenced a view or multiple tables. This has been fixed. In rare, timing dependent cases, if multiple connections were running the same procedure concurrently and a DDL operation occurred, the server could have hung.
In order for the hang to have occurred, the procedure needed to have had multiple statements that referenced the same view. This has also been fixed. ================(Build #1843 - Engineering Case #759129)================ If a Java stored procedure was defined such that the number of OUT parameters in the stored procedure definition was not equal to the number of OUT parameters in the Java signature, then the server would have returned an ArrayOutOfBounds exception when the procedure was called. This problem was introduced by the fixes for Engineering case 691193, and has now been fixed. ================(Build #1843 - Engineering Case #759114)================ Attempting to fetch the date 0000-01-00 as either a java.sql.Timestamp or java.sql.Date using the SQL Anywhere JDBC Driver would have resulted in the wrong value being returned and all subsequent timestamp values being incorrect. This date is not representable within Java, so the value now returned will be 0001-01-01, and all subsequent timestamp values will now be correct. Note that this change is most significant when querying the mv_known_stale_at column from SYS.SYSVIEW, since 0000-01-00 represents a materialized view that is either fresh or in an unknown state. The value will now be returned as 0001-01-01 instead. ================(Build #1839 - Engineering Case #758899)================ Spatial operations may, in rare cases, have crashed the server. This has been fixed. ================(Build #1839 - Engineering Case #751771)================ A user-defined function with a SELECT statement containing a common table expression could have been incorrectly inlined. This has been fixed. ================(Build #1838 - Engineering Case #758063)================ When the system procedure sa_server_messages() was called from services in such a way that it returned an empty result set, it could have put the server in a state where further concurrent calls to sa_server_messages() could have caused a server crash.
This has been fixed. ================(Build #1838 - Engineering Case #750691)================ In rare cases, appending or prepending data to the value in a compressed column could have resulted in a server hang. This has been fixed. ================(Build #1837 - Engineering Case #758564)================ LineStrings in Round Earth SRSs could have been represented incorrectly. This could have happened for LineStrings with segments crossing the equator from South to North. Additionally, under very rare circumstances, such LineStrings could have caused a stack overflow exception. This has been fixed. In existing databases, LineStrings that cross the equator from South to North and are stored in a Round Earth SRS (for example, 4326) should be reloaded in order to be stored in the correct representation. ================(Build #1835 - Engineering Case #756394)================ Under exceptionally rare circumstances, the server may have crashed when trying to reuse a cached query plan if tables used in the query were dropped and recreated. This has been fixed. ================(Build #1830 - Engineering Case #754724)================ Under exceptionally rare circumstances, the server may have crashed when receiving a host variable value failed on a TDS-based connection. This has been fixed. ================(Build #1829 - Engineering Case #757009)================ When the server executed an ALTER TABLE statement, or loaded the definition of a table with a unique index (primary key, table/column unique constraint, or unique index), it did not propagate the unique property to indexes declared non-unique that were defined on a superset of the columns of a unique index. For example, if a table has a primary key on columns (col1, col2) and there is another index on columns (col2, col3, col1), then this other index is unique as well. The optimizer relies heavily on this unique property of an index for cardinality estimation.
So the server may have used a non-optimal plan if the above conditions were true. This has now been fixed. ================(Build #1829 - Engineering Case #756681)================ In rare cases, the server could have returned an error like: "Table '_^_^_22072007_^_^_759_^_^_22072007_^_^_' not found" for an internal temporary table if an INSERT, UPDATE, or DELETE statement triggered the change of an immediate materialized view. The problem only happened when the very first INSERT, UPDATE, or DELETE of the table in a connection was executed in a nested block, for example a trigger or procedure. In this case, subsequent DML operations on the table may then have returned the above error. This has been fixed. ================(Build #1825 - Engineering Case #757492)================ When a multibyte character string with a truncated last character was unloaded, the last two bytes of a character may have been added to the end of the string as ASCII characters. This has been fixed. ================(Build #1823 - Engineering Case #753195)================ Some compiler options have been changed in this build that can result in performance improvements in the SQL Anywhere server in certain situations. ================(Build #1823 - Engineering Case #662248)================ In rare, timing and data-dependent cases, the server could have hung or crashed during execution of parallel query plans. This has now been fixed. ================(Build #1822 - Engineering Case #757274)================ The system procedure sp_parse_json() would have failed to parse escaped character sequences in strings. In particular, it would not have parsed \" (escaped double quote), nor any escaped character near the end of the string. For example, this statement would not have handled the \" sequences and would have generated an error: call sp_parse_json( 'myvar', '[{s:"this is \"quoted text\"."}]' ); This has been fixed. 
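The corrected sp_parse_json() behavior can be sketched as follows. This is a hedged illustration against a SQL Anywhere 16 server; the variable name 'myvar' comes from the entry above, and the array-indexing/dot syntax for reading the parsed result is assumed from SQL Anywhere's composite-type conventions:

```sql
BEGIN
    -- JSON containing escaped double quotes (\") that the unfixed
    -- sp_parse_json failed to parse
    CALL sp_parse_json( 'myvar', '[{s:"this is \"quoted text\"."}]' );
    -- The parsed result is an array of row values; the escaped quotes
    -- are preserved in the string member s
    SELECT myvar[[1]].s;
END;
```

Before the fix, the CALL itself would have generated an error on the \" sequences.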
================(Build #1822 - Engineering Case #756886)================ Mirroring servers now log more detailed information to the console log in the following cases: - when failover occurs and the mirror server takes over as the primary, "Now running as primary server" is now logged. Previously, "Database <name> (<file>) started at <date/time>" was logged, which was incorrect because the database was already started and was not restarted. - if the mirror server was behind applying operations and caused the primary to block for more than 10 seconds, "primary blocked for <number> seconds waiting for the mirror to catch up" is logged. - transaction log start and current offsets are now displayed when starting up a database and at other transitions (such as when the transaction log is restarted). - if there was an unexpected error that caused applying the transaction log to fail, a message is displayed and the database is not automatically restarted. ================(Build #1818 - Engineering Case #757146)================ Under rare circumstances, the server could have crashed when executing an ALTER MATERIALIZED VIEW … IMMEDIATE REFRESH statement. This has been fixed. ================(Build #1817 - Engineering Case #756960)================ If starting a database on a server with the "-udf abort" option failed due to an assertion failure, trying to start another database with the same name would have returned a "database name is not unique" error. This has been fixed. ================(Build #1817 - Engineering Case #750513)================ The server could have crashed if a TDS based connection sent a cancel request and a cursor close request at the same time for the same connection. Note that the cancel request could have either been sent explicitly by the application or implicitly by the underlying driver due to a query timeout. This problem has now been fixed. 
================(Build #1814 - Engineering Case #756801)================ The builtin SQL functions HTTP_BODY, HTTP_HEADER and HTTP_RESPONSE_HEADER were returning an empty string when the required value did not exist, instead of NULL as documented. This has been corrected so that they now return NULL when the required value does not exist. As well, the builtin function HTTP_RESPONSE_HEADER has been corrected to recognize the special header name '@HttpStatus' and return the status code when given '@HttpStatus'. ================(Build #1814 - Engineering Case #748976)================ If a database had one or more of the following public options set to a value different from the one listed: - ansinull=On - conversion_error=On - divide_by_zero_error=On - sort_collation=Internal - string_rtruncation=On, and the listed value was then set as a temporary option for the connection creating a materialized view, recovery of the CREATE MATERIALIZED VIEW statement would have failed. This has been fixed. Note that the fix applies only to CREATE statements executed with the fixed version of the server. ================(Build #1810 - Engineering Case #756028)================ Under rare circumstances, a server that was performing request level logging could have crashed when executing stored procedure code. This has been fixed. ================(Build #1808 - Engineering Case #756118)================ The server may have incorrectly returned the non-fatal assertion error 106104 "Field unexpected during compilation" for an IN list predicate, if the IN list contained expressions with column references for which the value was unknown at open time. This has been fixed. ================(Build #1808 - Engineering Case #692981)================ Within stored procedure code, ARGN and some other builtins, IF expressions, and conjunctions or disjunctions of predicates could have eagerly evaluated all of the subselects in subexpressions. 
For example, the expression ARGN( 1, (select 1/max(v) from t1), (select 1/min(k) from t2), (select 1/0 from dummy) ) would have evaluated all of the subselects (and returned an error) before noting that only the first of the subselects needed to be evaluated, and no error returned. This has been fixed. NOTE: The evaluation of subselects in procedural expressions now matches the evaluation in queries. For disjunctions and conjunctions, the order of evaluation of predicates is not guaranteed. ================(Build #1803 - Engineering Case #756032)================ If two subscribed publications each contained an article for the same table, and both articles contained the same list of columns, then when adding a synchronization subscription to the second publication the database server would have erroneously reported the error: SQLCODE -1325: Column subset for table '%1' in publication '%2' does not match that specified in publication '%3' This has been fixed. ================(Build #1803 - Engineering Case #755767)================ If the OUTPUT statement was attempting to write a string which contained characters whose Unicode representation was greater than U+FFFF (known as "supplementary characters"), the statement would have failed with the message "Could not save result set. Input length = 1" This has been fixed. ================(Build #1802 - Engineering Case #755524)================ Under rare circumstances, queries using hash filters could have caused the server to crash. This was more likely in environments with a heavy load, and/or cursors held open for long periods of time. This has been fixed. ================(Build #1800 - Engineering Case #755676)================ In very rare, timing dependent cases, establishing a connection with a non-default maximum packet size connection parameter could have caused a loaded server to crash. The default maximum packet size for a server can be specified using the -p server option. 
A client application can request a different packet size using the CBSize connection parameter. Mirroring servers containing the Engineering case 750502 change would also have used a non-default maximum packet size. This has now been fixed. ================(Build #1795 - Engineering Case #753193)================ If a database server was running multiple databases with mirroring enabled, and one of the databases incorrectly used a primary or mirror alternate server name that was already in use by another database, connections to the other database that used the alternate server name could have failed. This has been fixed so that if one database starts using a particular alternate server name, that alternate server name will not be removed if a second database attempts to use it. ================(Build #1793 - Engineering Case #755077)================ The following query would have resulted in a polygon with points not snapped to the grid: new ST_Point( 2, 2.5 ).ST_Buffer( 1.1 ) Trying to use the resulting CurvePolygon in an operation may have caused unpredictable results. This has been fixed. ================(Build #1793 - Engineering Case #753721)================ Row-level update triggers and referential update actions may have fired incorrectly for unchanged columns if the UPDATE statement was executed by SQL Remote and caused an update conflict. This has been fixed. ================(Build #1793 - Engineering Case #753719)================ Zero-length linestrings were being treated as invalid. For example, selecting ST_Geometry::ST_GeomFromText( 'LineString( 1 1, 1 1 )' ) would have resulted in an error indicating that linestrings must have at least two points. The OGC standard does not forbid zero-length linestrings, so this geometry should be accepted. This has been fixed. 
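The zero-length linestring case above can be sketched as follows. This is an illustrative query against a SQL Anywhere 16 spatial-enabled database, using the geometry from the entry:

```sql
-- Previously returned an error ("linestrings must have at least two
-- points"); now accepted, since the OGC standard does not forbid
-- zero-length linestrings
SELECT ST_Geometry::ST_GeomFromText( 'LineString( 1 1, 1 1 )' ).ST_AsText();
```

After the fix, the statement returns the geometry rather than an error.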
================(Build #1781 - Engineering Case #753615)================ In very rare, timing dependent cases, the server could have crashed when starting or stopping a database, failing on a "MESSAGE ... TO CLIENT FOR ALL" statement. This has been fixed. ================(Build #1769 - Engineering Case #752967)================ The server may have returned an incorrect result set for a complex query with multiple nested derived tables or views, if there was an equality predicate that could have been pushed inside the nested derived tables and the outside derived table could have been flattened. If the problem happened, the equality predicate was not executed and the result set contained additional incorrect rows. For example, in the query below the predicate "v2.c = 0" can be pushed inside the derived table "v2", and derived table "v2" can be flattened. The predicate was not executed in this case. select * from ( select distinct * from ( select a, sum(b) c from T1 group by a ) v1 ) v2 where v2.c = 0 This has been fixed. ================(Build #1765 - Engineering Case #753279)================ Execution of a DROP statement with an IF EXISTS clause for a table, view or materialized view incorrectly returned an error if an object of the requested type did not exist but an object with the same name and a different type did. For example: if a table "Products" exists, then "DROP VIEW IF EXISTS Products" returned the error "Table 'Products' not found". This has been fixed. ================(Build #1762 - Engineering Case #752025)================ If the server was running on a Windows machine which was suspended, the server could have hung once the machine was resumed. This has been fixed. 
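The DROP ... IF EXISTS behavior from Engineering Case #753279 above can be sketched as follows (the table name is taken from that entry; a minimal illustrative schema):

```sql
CREATE TABLE Products ( id INT PRIMARY KEY );

-- Before the fix: returned "Table 'Products' not found", because a table
-- (not a view) named Products exists.
-- After the fix: succeeds silently as a no-op, since no VIEW named
-- Products exists -- the IF EXISTS semantics now apply correctly.
DROP VIEW IF EXISTS Products;
```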
================(Build #1761 - Engineering Case #751684)================ If a domain was dropped and re-created within a batch, and the batch also included a subsequent reference to the domain in a CREATE TABLE or ALTER TABLE statement, the old (before drop and re-create) domain was used for the column definitions. This has now been corrected. ================(Build #1760 - Engineering Case #753080)================ When a column inside a row was referenced in the GROUP BY list, the column could not be found when it was referenced through the row. For example, consider the following query: select (B.gencol1).id as gencol1 from ( select ROW(A.id) as gencol1 from Product A group by A.id) B order by gencol1 B.gencol1 is a row with one column named id in it. (B.gencol1).id should have been able to reference that column, and in fact could reference it when "group by A.id" was removed. However, when the GROUP BY was present, an error was returned: "column 'id' not found in variable (B.gencol1)". This has now been corrected. ================(Build #1760 - Engineering Case #752869)================ In very rare timing dependent cases, if the arbiter for a running High Availability system was changed to a different arbiter server, and then changed back to the original arbiter server later, the arbiter may have contained incorrect state information. If this occurred, there was a small chance that the incorrect partner could have become the primary. This has been fixed. ================(Build #1758 - Engineering Case #693422)================ The default value specified in a CREATE [ OR REPLACE ] VARIABLE statement within a procedure, function, or trigger would have been ignored. Example: create or replace procedure foo () begin create or replace variable @v int = 123; end; call foo(); select @v; -- should return 123 but would have returned NULL This has been fixed. 
As a workaround, set the default or initial value for the variable in a separate statement after the variable has been created. ================(Build #1755 - Engineering Case #752641)================ If a server was started with one database on the command line with mirroring enabled, and then other databases were started on the server, the server could have stopped incorrectly if the mirrored database failed to start when the database was restarted. Note that mirrored databases are automatically restarted by the server if they lose quorum, as well as in other cases. This has been fixed so that if the mirrored database fails to start in this case, only the mirrored database will be stopped, and the server will remain running if there are other databases running. Also, if a server to server mirror connection failed due to a liveness timeout, no message was logged even if -z was enabled. In addition, other server to server connections contained little diagnostic information even if -z diagnostic logging was enabled. This has been fixed so that liveness timeout messages for mirror server connections are logged with or without -z. In addition, the messages "mirror server connection closed by other side" and "connection terminated abnormally; error code <number>" will be logged if -z is used and a mirror server connection is closed by the remote server or fails due to a network error. ================(Build #1755 - Engineering Case #752201)================ If a procedure was defined with the special values SQLCODE or SQLSTATE as parameters, and the procedure was used in the FROM clause of a SELECT statement, it was possible that the server could have crashed. This has been fixed. ================(Build #1753 - Engineering Case #752428)================ In rare timing dependent cases, if a connection between a copy node and its parent was lost, it was possible for the new connection to have repetitively connected and disconnected. 
Normally this condition would have resolved itself within a minute or so, but in some cases it could have continued indefinitely. This has been fixed. Also, if an ALTER MIRROR SERVER statement was used to add an alternate parent to a running and connected copy node, the alternate parent would not have been used until the copy node restarted. This has been fixed so the alternate parent can be used the next time the copy node loses its connection to its parent. ================(Build #1752 - Engineering Case #752612)================ When fetching a Date, Time, or Timestamp from the database as a string in a client, the fetch was much slower than if the data was bound to be fetched as a timestamp type or structure. This performance has been improved. Also, when fetching a Date, Time, or Timestamp, or other types that are not native strings or numerics, from the database when the client's character set was different from the database character set, the data would have been returned in the database character set. This has been fixed so the data will be returned to the client in the client's character set. ================(Build #1751 - Engineering Case #752203)================ If a certificate had an expiry date in the month of December, calling sa_certificate_info on that certificate could have crashed the server. This has been fixed. ================(Build #1747 - Engineering Case #751937)================ In rare cases, a copy node could have started but never have written or applied any changes that the primary or root had made until the copy node was restarted or the connection to its parent was lost. 
All of the following conditions must have been met for this problem to have been possible: - the database was backed up from the primary or root using dbbackup or the BACKUP statement - while dbbackup was running, there must have been other connections to the primary or root that had modified the database - between the backup and when the copy node was started on the backed up database, there was no commit done on the primary or root If this problem was occurring, the sa_mirror_server_status row for the copy node would show the copy node was connected with a recent last_updated value, but the log_written and log_applied values would not change. If the copy node was restarted, it would start applying and writing changes. This has been fixed. ================(Build #1746 - Engineering Case #751948)================ In rare timing dependent cases, a Linux or Unix server using TLS for mirroring or diagnostic connections could have hung. This has been fixed. ================(Build #1746 - Engineering Case #751941)================ If a database server received an HTTP POST request with a large payload, the server could have crashed. This has been fixed. ================(Build #1744 - Engineering Case #751752)================ A secure web procedure call through a proxy server could have failed with error code -988, "Invalid response from the HTTP server", if the web server or proxy attempted to redirect the call with a "301 Moved Permanently" status code. This has been fixed. ================(Build #1740 - Engineering Case #751585)================ If a database server attempted to send more than 16k of data over an HTTPS connection at one time, the client side of the connection could have hung. With version 16.x, this could also have happened if a TLS connection used a packet size bigger than 16000 bytes. This has been fixed. 
================(Build #1738 - Engineering Case #751386)================ There were a number of problems with the CREATE ENCRYPTED/DECRYPTED DATABASE statements with respect to dbspace files with relative pathnames: - If a database contained a dbspace with a relative pathname, and the server was not started in the same directory as that file, the CREATE ENCRYPTED/DECRYPTED DATABASE statements would not have been able to find the dbspace file - In the above case, the error message would have said "Output file cannot be written" even though it was an input file causing the problem. The error message had no indication of which file caused the problem - If the target filename for these statements was in a different directory, the encrypted or decrypted dbspace files would have remained in the original directory. For example, if a database had a dbspace with the filename "dbspace.dbs" and the following was executed: create encrypted database 'new/sales.db' from 'sales.db' key 'myencryptionkey' the server would have created the new dbspace.dbsE file in the same directory as dbspace.dbs, rather than in the "new" subdirectory. These problems have now been fixed. ================(Build #1738 - Engineering Case #751374)================ Under rare circumstances, a very long line segment defined in a Round Earth SRS (for example, SRS 4326) could have crashed the server. This has been fixed. ================(Build #1737 - Engineering Case #751285)================ Some complex statements took a long time to open a cursor. For some classes of statements, opening the cursor could not have been interrupted by cancelling the statement, nor could the server have been stopped until the open completed. One such instance has been corrected and the open will now respond properly to a cancel. 
================(Build #1734 - Engineering Case #751213)================ If multiple connections were using secure LDAPUA simultaneously while the public.trusted_certificates_file option was being changed, the server could have crashed. This has been fixed. ================(Build #1734 - Engineering Case #750502)================ Mirror server connections were ignoring the CommBufferSize (CBSIZE) connection parameter. This has been fixed so that CBSIZE can now be used to specify the maximum size of communication packets between mirror servers. In addition, transferring many log pages between mirror servers may have been slower than necessary. The default maximum packet size used for connections between mirror servers has been increased, which can improve performance in some cases. 16.0.0 mirror server connections now default to a maximum packet size of 64240 bytes, and 12.0.1 mirror server connections now default to a maximum packet size of 16000 bytes. ================(Build #1734 - Engineering Case #750363)================ If using FIPS, the SQL Anywhere FIPS DLL (dbfips16.dll) does not have to be in the path, but the actual FIPS module DLLs (libeay32.dll, ssleay32.dll on Windows; libcrypto.so, libssl.so on Unix) do. On Unix, they need to be in the LD_LIBRARY_PATH. If they were not found, the error message that resulted would have said that the dbfips DLL could not be found. This has been fixed. The correct filename for the missing file will now be given, and the error message will indicate whether the file could not be found at all or if it could not be found in the path. ================(Build #1733 - Engineering Case #750992)================ The server may have crashed when attempting to execute a certain class of incorrect queries if the select list contained alias names and aggregate functions. This has been fixed. 
================(Build #1727 - Engineering Case #740548)================ Information returned by the system procedure sa_mirror_server_status() was not updated for the hour after a daylight savings time change that changed the local time to be an hour earlier. The sa_mirror_server_status row corresponding to the server that was running the sa_mirror_server_status query was not affected. This has been fixed. ================(Build #1726 - Engineering Case #744047)================ In some cases, a server with a database containing procedures with nested row parameters could have crashed. This has been fixed. ================(Build #1726 - Engineering Case #743469)================ Setting the element of an array in a procedure could have caused the server to crash under some circumstances. This has been fixed. ================(Build #1724 - Engineering Case #654952)================ Under rare circumstances, the server could have hung when SUBSCRIBE BY values were being changed for an article while large numbers of connections were updating tables in the database. This has now been fixed. A workaround is to avoid changing SUBSCRIBE BY values simultaneously with connections performing INSERT, UPDATE or DELETE operations. ================(Build #1723 - Engineering Case #750298)================ In very rare, timing dependent cases, a copy node with children could have hung when it was reconnecting to its parent. This has been fixed. ================(Build #1723 - Engineering Case #750288)================ If a network server was started with the LocalOnly TCP option set, and the server was running on a portable device (e.g. a laptop), changes to the IP addresses on the machine would have been reflected in the server. For example, if a new IP address was added, a new listener would have been started on that IP address which could then accept connections from remote machines. This has been fixed. The LocalOnly option now disables IP address monitoring. 
================(Build #1719 - Engineering Case #749484)================ In rare cases, after a log rename on the primary, a mirror or copy node could have stopped writing and applying changes. In order for this to have occurred, all of the following conditions must have applied: - the mirror or copy node must have been fairly recently started and requesting log pages (in the case of the mirror, not yet synchronized) - the mirror or copy node must have been writing pages from the primary's current log file - the primary log file was renamed - something (such as a virus scanner) must have accessed the renamed log on the primary (or parent), preventing the primary from opening the file when the mirror or copy node requested pages from it This has been fixed so that the primary or parent will attempt to open the renamed log several times before failing. If the file open still fails after multiple attempts, the primary will display the message "Database "<database-name>" mirroring: failure when opening log file <file-name> for remote server <server-name>" and the mirror or copy node will display the message "Database "<database-name>" mirroring: database is not compatible with primary; files must be replaced" and shut down the database. ================(Build #1719 - Engineering Case #708252)================ The LOAD TABLE statement may have inserted invalid data values into columns of type NUMERIC; the server may have inserted values that exceeded the precision and scale of the column type definition. This has been fixed. Now values for NUMERIC columns will be cast to the column data type if needed. ================(Build #1717 - Engineering Case #749824)================ If a secure web procedure contained any of the certificate_unit, certificate_name, or certificate_company options, and one or more did not match the certificate used by the server, the connection could have hung or timed out. This has been fixed. 
================(Build #1717 - Engineering Case #749170)================ When invoked with a very large integer value, the hours() function could have returned an incorrect result. For example, the hours function invoked with a '12:00' time and a very large integer argument would have returned a value like '22:44'. This has been fixed. ================(Build #1713 - Engineering Case #749622)================ If an archive backup was corrupt in a specific way, it was possible for the database server to crash when attempting to restore it. This has been fixed. ================(Build #1712 - Engineering Case #749628)================ During execution of a VALIDATE statement, the server would have taken a data lock on the primary key table of any foreign keys of the table being validated. This has been fixed. ================(Build #1710 - Engineering Case #749387)================ Under certain rare circumstances the server could have become deadlocked. For this to have occurred there must be a table continually undergoing many insertions and deletions. This has been addressed. ================(Build #1709 - Engineering Case #746236)================ Under very rare circumstances, the server may have crashed during server shutdown if a SQL Anywhere debugger was still connected. This has been fixed. ================(Build #1706 - Engineering Case #749169)================ In some rare cases, attempting to modify a database file after the server using that file had been shut down using the Stop Server utility (dbstop) would have failed with 'permission denied'. For this to have occurred, the server must have been using the external environment (e.g. PHP). This is a very timing sensitive bug and rarely reproduces. This has been fixed. ================(Build #1698 - Engineering Case #769356)================ Under exceptionally rare circumstances, the server may have crashed during concurrent execution of a stored procedure that contained a LOAD TABLE statement. This has been fixed. 
================(Build #1697 - Engineering Case #748351)================ Mirror and copy node servers were not applying transaction log changes as efficiently as they could have been. This has been fixed to be more efficient. ================(Build #1697 - Engineering Case #748096)================ If a stored procedure containing a single SELECT statement that uses a key join was inlined, and the connected user was not the procedure owner and had no select permissions on the table(s) in the query, a permission error could have been returned. This has been fixed. ================(Build #1696 - Engineering Case #748450)================ In some rare cases, the server may have crashed while processing HTTP/HTTPS requests. This has been fixed. ================(Build #1691 - Engineering Case #748170)================ In rare timing dependent cases, a copy node that had just processed a log rename could have failed assertion 200505, if its child was requesting log pages. This has been fixed. ================(Build #1684 - Engineering Case #747810)================ When a mirror or copy node was shut down while a connection was blocked on a lock held by an internal connection which was applying log operations, the shutdown could have hung. Note that mirror or copy node databases shut down and restart automatically for a number of internal reasons; for example, the mirror database can shut down and restart if the connection between the partner servers becomes disconnected but the primary partner is still running. This has been fixed. ================(Build #1681 - Engineering Case #747649)================ In rare timing dependent cases, a copy node could have failed with a log mismatch message or an assertion failure when a log rename was performed. 
In order for this problem to have occurred, in addition to processing the log rename, the copy node needed to be transitioning from requesting to receiving log pages from its parent, or a connection to its parent would have had to be dropped and reconnected. This has been fixed. ================(Build #1681 - Engineering Case #747637)================ If an ODBC application performed a wide or array INSERT, where the number of rows times the number of columns was more than 32767, the INSERT may not have been as efficient as it should have been. This has been fixed so that the wide or array INSERT is more efficient in this case. ================(Build #1680 - Engineering Case #696469)================ Executing an UNLOAD ... TO FILE or UNLOAD ... INTO CLIENT FILE statement with the APPEND ON BYTE ORDER MARK option, would have written a byte order mark even if the unload file already existed with a size greater than zero bytes. This has now been fixed. ================(Build #1674 - Engineering Case #747038)================ When the HAVING clause of a query contained predicates of the following forms, the server attempted to estimate the selectivity based on the column statistics: - HAVING SUM( column ) <= constant - HAVING SUM( column ) < constant - HAVING SUM( column ) > constant - HAVING SUM( column ) >= constant Incorrect selectivity could have been returned in the following cases: - the column was a number but the constant was not (e.g., it was a date). - the comparison relation was <> or = (these cannot be estimated). - the comparison was of the form SUM(column) LIKE string_constant. This has now been corrected. ================(Build #1673 - Engineering Case #747819)================ When the server encountered a file error when writing to an ETD file, it should have returned it as a SQL error; but in some cases, the error was not returned. This has been fixed. 
================(Build #1673 - Engineering Case #747141)================ In very rare timing dependent cases, a copy node's transaction log could have been corrupted if the primary failed over to the mirror. In order for this to have occurred, the old primary would have needed to still be running and not in the process of stopping during the failover (for example, running extremely slowly due to lack of machine resources). If this problem did occur, the most likely failures would be assertion 100902, 100903 or 100904, but other failures were also possible. This has been fixed. ================(Build #1671 - Engineering Case #747053)================ If the primary server failed and the mirror took over as the new primary, the old primary could have failed to start and reported "database is not compatible with primary; files must be replaced" in the console log, even though it should have been able to start. In order for this failure to have been reported when it should not have been, there had to be over 64 transaction log pages since the last checkpoint. This has been fixed. Note that this failure (database is not compatible) can still validly occur in certain cases. ================(Build #1671 - Engineering Case #746924)================ In very rare, timing dependent cases, mirroring servers (likely more than one) could have hung indefinitely. If, while processing ALTER DATABASE SET PARTNER FAILOVER, connections between the primary and mirror servers timed out, or were dropped, before the failover operation completed, the primary server could have stopped accepting connections. Also, the sa_mirror_server_status log_written offset could have been incorrect around the time a log rename occurred. These issues have been fixed. ================(Build #1664 - Engineering Case #746586)================ In rare cases, the server may have crashed or returned an incorrect SQL error if the UPDATE clause of a bypass update statement had an invalid subselect as table-expression. This has been fixed. 
================(Build #1657 - Engineering Case #746290)================ On Windows systems, CPUs numbered 32 and above were not detected correctly and were treated as offline. This has been fixed. ================(Build #1651 - Engineering Case #696753)================ Under rare circumstances, executing a CREATE TEMPORARY PROCEDURE statement could have crashed the server. This has now been fixed. ================(Build #1646 - Engineering Case #745648)================ In rare timing dependent cases, near when a transaction log rename was being performed, the sa_mirror_server_status log_written or log_applied columns could have been inaccurate. This has been fixed. ================(Build #1644 - Engineering Case #743578)================ In extremely rare, timing dependent cases, the server could have crashed when a database was starting. This has been fixed. ================(Build #1643 - Engineering Case #745394)================ When connected using TDS (Open Client or jConnect), executing a procedure that contained a SELECT INTO statement could have caused the server to crash. Note that chained mode needed to be off for this to have occurred. This has now been fixed. ================(Build #1636 - Engineering Case #740799)================ The server ran much more slowly on Linux systems than on Windows. Server performance has now been improved so that speed on Linux should be comparable to that on Windows. ================(Build #1632 - Engineering Case #731483)================ Several issues with database mirroring and read-only scale-out have now been fixed. 
1) If the mirror or copy node was requesting pages from the primary or parent (it had recently started and had not caught up to the current log operation), and renamed log files required by the mirror or copy node had been deleted on the primary or parent since the mirror or copy node started requesting pages, then the mirror or copy node could have stopped applying log operations or failed with assertion 100904. This has been fixed so that the primary or parent now correctly detects this case (a required renamed log file has been deleted) and logs the message "Database <DBName> mirroring: failure when requesting pages on remote server <ServerName>: missing transaction log with start offset <Offset>" (where <DBName>, <ServerName> and <Offset> are replaced with appropriate values). If this occurs, the mirror or copy node will log the message "Database <DBName> mirroring: database is not compatible with primary; files must be replaced" and the database, and possibly the server, will stop. 2) The message "Database server shutdown due to incompatible files for database mirroring" could have been displayed if an incompatible log file was detected, even though the server was not stopped. If there is more than one database running on the server, the affected database is stopped, but the server is not. This has been fixed so that this message is only logged if the server is actually being stopped. 3) In rare timing dependent cases, after one or more ALTER DATABASE SET PARTNER FAILOVER statements, neither partner could have taken the role of primary. This has been fixed. As a workaround, the ALTER DATABASE ... FORCE START statement can be used to force a partner to take over as primary if this problem occurred. 4) If a copy node or async mirror got significantly behind writing log pages, it could have caused requests to the primary database to block for more than a minute. This has been fixed so that the primary will not be blocked for more than about 10 seconds. 
================(Build #1631 - Engineering Case #743662)================ On a heavily loaded server, client connections could have been incorrectly dropped in timing dependent cases. If this occurred, the client would likely get a "Communication error" error and the server would report "Disconnecting Client - 120 seconds since last contact" (or a different number of seconds) in the console log. This has been fixed so that dropped connections are less likely. Note that these errors can still correctly occur if there is a network issue or if either the client or server computer completely bogs down (most likely due to limited resources). ================(Build #1625 - Engineering Case #743778)================ Validation of a recently truncated table using a read-only server may have caused a crash. This has been fixed. ================(Build #1620 - Engineering Case #702506)================ The LOCATE function may have returned an incorrect result if the search string contained multi-byte characters. This has now been fixed. ================(Build #1613 - Engineering Case #743871)================ If the query of a cursor used the OrderedGroupBy algorithm in its execution plan, and was used to perform fetches that reversed the direction of the scan, incorrect results could have been returned. This could have been observed for a cursor on an aggregate query that fetched the first row, then the second, then returned to the first row. This has been fixed. ================(Build #1613 - Engineering Case #740708)================ In the SYSUSER, SYSEXTERNLOGIN and SYSLDAPSERVER system views, columns containing password hashes were visible to users without the SELECT ANY TABLE privilege. This has been fixed. Note that in order to apply this fix, existing version 16.0 databases will need to be upgraded once the server containing the fix is deployed. New databases created with the fixed version of the server do not need to be upgraded. 
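As a quick check of the fix above (a sketch only; `SELECT *` is used rather than naming the password-hash columns, since the exact column names depend on the catalog version):

```sql
-- Connected as a user WITHOUT the SELECT ANY TABLE privilege:
-- with the fix applied (and the database upgraded), any password-hash
-- columns in these system views should no longer expose hash values.
SELECT * FROM SYSUSER;
SELECT * FROM SYSEXTERNLOGIN;
```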
================(Build #161 - Engineering Case #787345)================ The changes for Engineering case 783569 introduced the possibility of a server crash when executing a Remote Data Access statement when an error was encountered during creation of an underlying cursor. This crash has now been fixed. ================(Build #1609 - Engineering Case #743448)================ If a column with a default was added to a table with existing data, and the default was subsequently changed, some rows in the table could have been left in an inconsistent state, resulting in assertions, crashes or incorrect results. This has now been fixed. A workaround would be to unload and reload the table before the second ALTER. ================(Build #1608 - Engineering Case #743692)================ Servers with the fix for Engineering case 742355 could have returned garbage characters for connection_property( 'Name' ) if there were non-ASCII characters in the CON connection parameter. This has been fixed. ================(Build #1605 - Engineering Case #743341)================ In rare cases, if an application made an external environment call that in turn performed a server-side request, then the server could have crashed or lost an update if the server-side request resulted in a deadlock error. This problem has now been fixed. ================(Build #1600 - Engineering Case #742949)================ If a user was logged into a mirror or a copy node, or an object owned by that user was in use on a mirror or copy node, and the user was dropped on the primary server using the REVOKE CONNECT statement, the mirror or copy node would have stopped with a fatal assertion. This has been fixed. Connections logged in as the dropped user, as well as connections using objects owned by that user, will now be dropped before the user is dropped. 
================(Build #1598 - Engineering Case #742872)================ The server could have become unresponsive or extremely slow if it was involved in multiple high-availability configurations (for example, if it was running a number of read-only copy nodes), and there were networking problems causing loss of connectivity. This would have been more noticeable on single-processor machines. This has been fixed. ================(Build #1595 - Engineering Case #732114)================ The first insert of a blob into a table after the database was started may have taken longer under the following conditions: - the blob to insert was longer than its column's INLINE value - the table contained a large number of blobs that were longer than about 8 database pages (blobs with a blob index) - the columns containing these blobs were created with a blob index (the default) - large parts of the table were not in cache This has been fixed. To work around the problem, the blob indexes can be dropped by running the following statement for all long varchar or binary columns with blobs longer than 8 pages: alter table <table-name> alter <column-name> no index To fix the problem in existing databases, rebuild the database, or drop and recreate the blob index by running ALTER TABLE. This must be done with a fixed version of the server, and only on table columns meeting the above conditions. ================(Build #1592 - Engineering Case #742365)================ A number of incorrect behaviors could have occurred when using database mirroring, including: - in rare, timing dependent cases, a mirror or copy node could have crashed, hung or failed assertion 102010 - when a mirror or copy node reconnected to the primary or parent, it was possible for it to not request, write or apply log pages. 
- when a mirror was yielding to a preferred server, or the "ALTER DATABASE SET PARTNER FAILOVER" statement was executed on the primary, it was possible for the previous mirror to not take over as the primary (both partner servers could have had the mirror role) These problems have been fixed. ================(Build #1588 - Engineering Case #742355)================ SQL Anywhere ADO.NET drivers without the fix for Engineering case 741707 could have sent invalid connection pooling requests to the server, which could have resulted in a server crash. This has been fixed so that the server will not crash even if the client makes invalid connection pooling requests. SQL Anywhere ADO.NET drivers without the 741707 fix may have requests fail with the error "Run time SQL error -- *** ERROR *** Assertion failed: 104909". The ADO.NET driver needs to be updated if this occurs. ================(Build #1587 - Engineering Case #665195)================ If a column or variable name was misspelled in a function that was inlined, and the scope into which inlining was performed contained an object with a matching name, incorrect results, or an incorrect error, could have been returned. This has been fixed. Example: CREATE FUNCTION func1( @a integer) RETURNS INTEGER BEGIN DECLARE @ret INTEGER; SET @ret = ( a + 10 ) / 100; RETURN @ret; END; SELECT a, func1( b + c ) as ret FROM tab; The expected error is 'Column "a" not found'. ================(Build #1585 - Engineering Case #736004)================ Some window functions, including MIN and MAX, could have given incorrect results if called over a column with data type TIMESTAMP WITH TIME ZONE. This has been fixed. ================(Build #1585 - Engineering Case #735216)================ In rare situations, queries containing a Merge Join appearing below another Merge Join may have failed assertion 106104: "Field unexpected during compilation". This has been fixed. 
================(Build #1584 - Engineering Case #742016)================ If a certificate used one of a number of algorithms (including SHA256, SHA384, and SHA512) for signing, SQL Anywhere would not have been able to use it for TLS or HTTPS. An error code of 12357 or 12394 may have been displayed. This has been fixed. ================(Build #1584 - Engineering Case #742013)================ Starting a database could have taken 10 seconds or more longer than it should have if the -ar, -ad or -xp database options were used with servers running on Windows. This could have occurred if files other than the database and current transaction log files for the server that was attempting to start the database were in the same directory as the database's log file, and these files were locked by the current server or another process. For example, if a single directory contained database files in use by a different server process, or a console log file in use by the server starting the database, a server starting the database with -ar, -ad or -xp would have started slowly. This has now been fixed. As a workaround, the database files could be put in a directory containing only the database files for a single database. ================(Build #1584 - Engineering Case #742010)================ A mirror server or copy node could have crashed if snapshot isolation was enabled and read-only connections were committed or rolled back. This has been fixed. ================(Build #1583 - Engineering Case #741971)================ A TLS error (for example, a problem with a server's certificate) that occurred when executing a secure web service may have returned the error "The secure connection to the remote host failed: <NULL>" or "HTTP request failed. Status code '0'". On Mac OS X systems, the message "The TLS handshake failed, error code 0" would have been displayed on the server console. This has been fixed. 
================(Build #1583 - Engineering Case #741724)================ If a DSN contained a connection string with double quotes (e.g. "server=MyServer;start='dbeng16 -o \"file with spaces.txt\"'"), the output from dbdsn -cm (intended to be a dbdsn command that would re-create the DSN) would have incorrectly escaped the double quotes. This has been fixed. ================(Build #1583 - Engineering Case #740897)================ If a web service was created with authentication off, attempting to execute the procedure while specifying a user that required LDAP authentication would have failed. This has been fixed. ================(Build #1582 - Engineering Case #741870)================ If the database server was started with the -fips option, but the FIPS library was not available, the server would have given an error and then hung. The server process would have had to be killed. This has been fixed. ================(Build #1576 - Engineering Case #741547)================ Authenticating with LDAP using an empty password could have caused the server to crash. This has been fixed. ================(Build #1576 - Engineering Case #741546)================ Authenticating with an LDAP server may have failed, even when the correct user name and password were given. This was dependent on the LDAP server being used and would not have been intermittent (i.e. if it failed, it failed all the time). This has been fixed. ================(Build #1568 - Engineering Case #741078)================ A server could, in rare cases, have failed assertion 104301 ("Attempt to free a user descriptor with non-zero reference count") on database shutdown if there were active external environment calls at the time of the shutdown request. This problem has now been fixed. 
================(Build #1566 - Engineering Case #740895)================ If a NULL byte was used in the string provided to the DELIMITED BY, ROW DELIMITED BY, COMMENTS INTRODUCED BY, QUOTE, or ESCAPE CHARACTER clauses of a LOAD TABLE statement or OPENSTRING expression, then the server would have used all characters prior to the first null byte as the argument to the option. For example, if the user specified DELIMITED BY '#\x00@' then the server would use '#' as the column delimiter. This problem has been fixed. ================(Build #1563 - Engineering Case #740792)================ The system function xp_getenv() could have become "sql security definer" in databases initialized to run system procedures as definer. However, all new procedures in version 16 and higher are supposed to remain as "sql security invoker". This would have happened if a version 16 database was initialized with either the -pd flag for dbinit, or the "system procedure as definer on" clause was used in either the CREATE DATABASE or the ALTER DATABASE UPGRADE statements. This has now been fixed. In order to repair the function in an existing database, run ALTER DATABASE UPGRADE PROCEDURE ON with an upgraded server. ================(Build #1563 - Engineering Case #740784)================ When OPENSTRING() is used in the FROM clause, the ROWID() function can be used to get the row number of each row read from the string. In execution plans where the same OPENSTRING() was executed more than one time, the ROWID() values for the second and subsequent executions would not have given the correct line number: they would have continued to increase. In addition, error messages for rows loaded from the string value could have reported incorrect line numbers. This has been fixed. ================(Build #1562 - Engineering Case #740787)================ The connection property ApproximateCPUTime was reporting twice the amount of CPU time consumed by a connection. This has been corrected. 
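The property can be checked from any connection; on servers without this fix, the value returned below would be roughly double the CPU time actually consumed:

```sql
-- CPU time (in seconds) consumed by the current connection; prior to
-- this fix the reported value was about twice the actual amount.
SELECT CONNECTION_PROPERTY( 'ApproximateCPUTime' );
```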
================(Build #1561 - Engineering Case #740701)================ Calling the system stored procedure dbo.sp_objectpermission() would in some cases have returned the string 'NULL' in some of the result columns, instead of returning the NULL value. This problem has been fixed. Note that a database upgrade is required to get this fix. ================(Build #1559 - Engineering Case #740649)================ Under very rare conditions, the server may have entered an infinite loop while performing massive amounts of concurrent inserts. This has now been corrected. ================(Build #1559 - Engineering Case #735005)================ If the temporary directory used by the server (specified by one of the SATMP, TMP, TMPDIR, or TEMP environment variables) was longer than about 48 characters (this differs by platform), clients would not have been able to connect to servers over shared memory; they would simply have failed to find the server. This has been fixed. ================(Build #1558 - Engineering Case #740441)================ If certain SMTP errors occurred during xp_sendmail, the error code and text returned by xp_get_mail_error_code() and xp_get_mail_error_text() may have been 250 and "2.0.0 Reset state" respectively, regardless of what actual error occurred. This has been fixed to return the correct SMTP error details. ================(Build #1555 - Engineering Case #740400)================ In rare cases, a copy node or mirror server could have used more memory than expected. If this occurred, the extra memory would typically have been less than 1MB. This has been fixed. ================(Build #1554 - Engineering Case #701648)================ Procedure profiling results would have shown an invalid execution time if the total execution time of the request exceeded 4,294,967,295 microseconds. This has been fixed. 
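That threshold is the 32-bit unsigned maximum, which works out to a little over 71 minutes of execution time; a quick check:

```sql
-- 4,294,967,295 microseconds (the 32-bit unsigned maximum) in minutes:
SELECT 4294967295 / 1000000.0 / 60 AS limit_minutes;  -- about 71.6
```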
================(Build #1553 - Engineering Case #740130)================ Under very rare circumstances, DML operations on a table with an immediate text index that used an external prefilter or termbreaker library could have caused assertion failures or other issues with the database server. This problem has now been fixed. ================(Build #1548 - Engineering Case #739928)================ When the 64-bit version of the Information utility (dbinfo) was run on Mac OS X systems with the -u option, the values for the "Index Pages", "%used (index pages)" and "Percent of File" columns could have been very large positive or negative numbers. This has been fixed. ================(Build #1547 - Engineering Case #739706)================ If a server was started with a database that contained an invalid path to the transaction log file, the startup message and console log could have contained garbage characters. This has been fixed. ================(Build #1547 - Engineering Case #681579)================ For certain queries containing the built-in function ARGN(), the ARGN() expression may either have returned an incorrect value due to incorrectly matching an earlier case in the expression, or caused the server to crash. The probability of either failure was very small, and depended on both the database page size and the query text; however, the failure was deterministic for a given database and query text. This has been fixed. ================(Build #1546 - Engineering Case #733306)================ The server would have returned the error "Correlation name ... 
not found" for a query when the following conditions were true: - the query contained a proxy table and a nested query block with an outer reference - the nested query block used a view with a non-flattenable select statement - the outer reference in the nested query block could have been pushed into the select statement of the view For example, in the following query, T1 is a proxy table, and the query would have returned the error "Correlation name 'V0' not found": create view V1 as select 2 as col1 union select 1; select ( select col1 from V1 where col1 = V0.c21 ) as D from T2 V0, T1; This has been fixed. ================(Build #1544 - Engineering Case #739349)================ If an application executed a SQL SECURITY INVOKER Java stored procedure with an effective user id that required quoting, or that was owned by a user id that required quoting, then the server would have failed the Java procedure execution with a syntax error. This problem has now been fixed. Note, a user id that requires quoting would be one that was either a keyword, or contained a dot (.) or some other unusual character. No other external environment is affected by this problem. ================(Build #1544 - Engineering Case #739187)================ If an application executed the STOP JAVA or STOP EXTERNAL ENVIRONMENT statement for a database scoped external environment (i.e. Java or CLR), then the server and external environment resources associated with the connection would be correctly cleaned up, but the external environment executable would not have been shut down. This problem has been corrected, and the executable will now shut down when the STOP JAVA or STOP EXTERNAL ENVIRONMENT CLR statement is explicitly executed and there are no other connections using the database scoped external environment. ================(Build #1543 - Engineering Case #739269)================ A procedural statement of the form "IF [NOT] EXISTS( simple-subselect ) ..." may have failed with a warning or an error. 
The warning "The result returned is non-deterministic" was returned if the subselect did not contain an ORDER BY clause. The error "Cursor has not been declared" was returned if the THEN clause, but not the ELSE clause, contained a SELECT statement that returned a result set, and the IF condition evaluated to 'false'. This has been fixed. ================(Build #1539 - Engineering Case #739248)================ CREATE TABLE allows the default for a column to reference a non-existing variable; the server does not validate the DEFAULT clause at creation time. Attempting to do the same thing in ALTER TABLE caused a server crash. This has been fixed, and the server now returns an appropriate error message. ================(Build #1538 - Engineering Case #739154)================ In rare timing dependent cases, the primary could have restarted unnecessarily if the connection between the primary and mirror server was dropped at about the same time that the mirror became synchronized. When the primary restarted, the database was stopped and restarted, causing all connections to be dropped. This has been fixed so that the primary does not restart in this case. ================(Build #1538 - Engineering Case #738994)================ If the mirror tried to take over as primary and failed, in rare cases it was possible for the database on the mirror to have become corrupted. The most likely case of this occurring involved the connection between the two partners being dropped for some reason, and then the mirror server or database being shut down while the mirror was in the middle of attempting to take over as primary. This has been fixed. 
================(Build #1538 - Engineering Case #738756)================ If an operation on the database had executed trigger actions that included UPDATE PUBLICATION commands, and that operation was also implicitly rolled back because of an error in the trigger (such as a referential integrity violation), then it was possible for SQL Remote (dbremote) to have sent operations that would have been rolled back in the database. The Log Translation utility (dbtran) may also have shown operations as committed in the translated transaction log that were rolled back. This issue has now been fixed. ================(Build #1535 - Engineering Case #739835)================ The body of email messages sent using xp_sendmail would have been truncated at 255 characters. This has been fixed. ================(Build #1535 - Engineering Case #739834)================ If a recipient list for an email sent using xp_sendmail contained leading, trailing, or duplicate semi-colons, the server could have crashed when sending the message. This has been fixed. ================(Build #1535 - Engineering Case #738955)================ In rare timing dependent cases, when the primary server was shut down, the mirror could have failed to take over as primary. In versions 11 and 12, both the primary and mirror could have appeared to hang for two minutes while the primary was shutting down. This has been fixed. ================(Build #1533 - Engineering Case #738912)================ In extremely rare cases, a copy node could have been partially connected to its parent indefinitely, and not write or apply log operations. In order for this to have occurred, a connection to its parent would need to have been dropped shortly after it was established. The sa_mirror_server_status() system procedure would have reported the copy node as connected. This has been fixed. As a workaround, restarting the copy node in this state would cause it to get and apply log operations from its parent. 
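Node state can be inspected with the system procedure mentioned above; before this fix, a copy node stuck in the partially connected state would still have appeared as connected in this output:

```sql
-- Reports the state of each mirror/copy-node server, including
-- connection state and the log_written/log_applied offsets.
CALL sa_mirror_server_status();
```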
================(Build #1531 - Engineering Case #738617)================ In exceptionally rare circumstances, the server may have crashed while trying to use the string value of a column DEFAULT or COMPUTE definition. This may have occurred after resetting the column's DEFAULT or COMPUTE definition using ALTER TABLE. Restarting the database after executing the ALTER TABLE prevents the problem. This has been fixed. ================(Build #1530 - Engineering Case #736687)================ Queries with predicates of the form '( T.X = <constant expression> OR T.X = <constant expression> OR ...)' may have had suboptimal plans if the <constant expression> was a variable, host variable, or aliased constant. This has been fixed. ================(Build #1528 - Engineering Case #737917)================ Some queries that contained an invalid GROUP BY or HAVING clause could have failed to give an error. Specific constructs that were not correctly validated include the following: - x IS OF TYPE(...) - x IS [NOT] DISTINCT FROM y - TSEQUAL( x, y ) - array_expression[[ index_expression ]] - (row_type_expression).field_name - subquery predicates or subselects that contained joins with outer references in the ON condition In certain cases, these invalid queries failed with a non-fatal assertion failure. In specific circumstances, it was also possible for the server to crash. This problem has now been fixed. ================(Build #1528 - Engineering Case #681578)================ In rare timing dependent cases, a copy node or async mirror could have failed assertion 100927 ("Transaction log page number ... from parent or partner is not expected page number ... "). This problem could have occurred soon after the copy node started, or soon after the copy node reconnected to a parent. This has been fixed. 
================(Build #1527 - Engineering Case #738517)================ On Unix platforms, using any of the email functions while running a version 12.0 database on a version 16.0 server would have failed with the error "Dynamic library 'libdbextf.so' could not be loaded". This has now been fixed. There are two workarounds: - ensure that libdbtasks12_r.so.1 is in the library load path (e.g. copy it into the SA 16.0 installation next to libdbextf.so.1) - upgrade the database to version 16.0 ================(Build #1527 - Engineering Case #738510)================ The xp_getenv() built-in function returns long binary. In nearly all cases, since environment variables are typically used to store strings, users would need to cast the long binary value to a char or nchar type, depending on their needs. Optionally, csconvert() would need to be used in order to ensure the strings are in the correct character set, but this is not particularly easy to use. This has been corrected. In databases created after this fix, xp_getenv() will return a long nvarchar. This allows strings to be used easily, while minimizing string mangling due to character set conversion. The environment variable value is obtained in UTF-16 from Windows. On Unix platforms, both the variable name and the variable value are assumed to be in the OS charset, and are then converted to the NCHAR charset for the database. If this is not the case, some variable name lookups may fail or values may be mangled. ================(Build #1523 - Engineering Case #738276)================ For specific types of views and procedures using a WINDOW specification, it was possible for the server to crash when processing a query referencing the view or procedure. This has now been fixed. 
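A minimal sketch of the kind of construct involved (table and column names here are hypothetical): a view whose SELECT uses a named WINDOW specification, referenced by an outer query.

```sql
-- Hypothetical view using a named WINDOW specification; a query
-- referencing such a view could previously crash the server.
CREATE VIEW v_salary_rank AS
  SELECT emp_id, dept_id,
         RANK() OVER w AS rank_in_dept
  FROM Employees
  WINDOW w AS ( PARTITION BY dept_id ORDER BY salary DESC );

SELECT * FROM v_salary_rank WHERE rank_in_dept <= 3;
```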
================(Build #1523 - Engineering Case #738260)================ A read-only scale-out system, or an asynchronous mirror, could have done more disk writes and more communication with children than necessary, which could have resulted in somewhat slower performance. The unnecessary writes and communication were most likely when many small transactions per second were being committed on the primary or root server. This has been fixed so that the unnecessary writes and communication have been eliminated. ================(Build #1523 - Engineering Case #738247)================ Trace event session names and trace event names were being treated as case sensitive. This has been fixed. ================(Build #1523 - Engineering Case #738246)================ The system trace events SYS_RLL_StartProcedure, SYS_RLL_StopProcedure, SYS_RLL_StartTrigger, and SYS_RLL_StopTrigger only logged one letter in the procedure_name field of the trace event. This has been fixed. ================(Build #1523 - Engineering Case #737036)================ Executing a REORGANIZE TABLE statement could have worked sub-optimally and may have left some pages with single rows on them. This has been fixed. ================(Build #1521 - Engineering Case #738031)================ Using dbstop -c "...;NODETYPE=..." could have failed. Note that it is not recommended that NODETYPE be used with dbstop, since it may not be clear which server will be stopped. This has been fixed. ================(Build #1516 - Engineering Case #693319)================ The SQL functions set_bit() and get_bit() incorrectly accepted the value 0 for the bit-position parameter. This has been fixed; these functions now return an error. ================(Build #1509 - Engineering Case #737186)================ In rare, timing dependent cases, executing the "ALTER DATABASE SET PARTNER FAILOVER" statement could have hung. 
If this occurred, the server itself was not hung, but the server on which the statement was executed would not accept connections to the database being failed over. This has been fixed. As a workaround, the server that was running as the primary could be stopped, or the database that was running as the primary could be stopped by connecting to the utility database and executing the STOP DATABASE statement. ================(Build #1509 - Engineering Case #736806)================ Under rare circumstances, evaluation of very complex expressions could have caused the server to crash. This has been fixed. ================(Build #1508 - Engineering Case #737159)================ If two separate connections had set the PRIORITY connection option, and the database shut down unexpectedly so as to require automatic recovery, it was possible for the database to fail the automatic recovery with assertion 100904: Failed to redo a database operation (id=#, page_no=0x#, offset=0x###) Error: Permission denied: you do not have permission to set the option 'PRIORITY' This has been fixed. An upgraded database server will now be able to recover the database successfully. ================(Build #1508 - Engineering Case #736680)================ It was possible for a mirror server to have hung for a few seconds or indefinitely, to have had connection timeouts, to have failed, or to have had poor performance. These problems were more likely when running multiple mirrored databases on a single server that had the automatic multiprogramming level enabled (the default) and there were nearly as many mirrored databases as cores. This has been fixed by ensuring that some long running background tasks do not affect the number of tasks controlled by the multiprogramming level. A workaround is to ensure that the minimum multiprogramming level is at least three times the number of mirrored databases. 
Two new server properties were added by this change: 1) property( 'CurrentMirrorBackgroundWorkers' ): The number of workers currently being used for database mirroring background tasks. These workers are separate from those controlled by the multiprogramming level. 2) property( 'MaxMirrorBackgroundWorkers' ): The highest number of workers used for database mirroring background tasks since the server started. These workers are separate from those controlled by the multiprogramming level. ================(Build #1506 - Engineering Case #736786)================ Two ALTER TABLE ... DROP DEFAULT statements, executed consecutively on columns that were created in an online fashion, would have caused the server to fail assertion 200610 (Attempting to normalize a non-continued row). This has now been fixed. ================(Build #1505 - Engineering Case #736570)================ Statements that included invalid Transact-SQL outer join predicates did not give appropriate errors. For example, the following gave an “invalid expression” error: select count(*) c having c *= 0 This has been fixed. ================(Build #1504 - Engineering Case #736547)================ Under rare circumstances, evaluation of a query with complex expressions could have caused a server crash. This has been fixed. ================(Build #1504 - Engineering Case #725690)================ Under rare circumstances, if recovery operations performed on a database included changes affecting immediate text indexes, there was a potential for triggering assertion failures or server crashes. There was also a potential for generating incorrect score values or incorrect results for subsequent queries using the text index. These problems have now been corrected. 
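The two mirroring properties described for Engineering Case #736680 above can be queried like any other server property; a minimal sketch (the column aliases are illustrative only):

```sql
-- Current and peak counts of background workers used for database
-- mirroring tasks; both are separate from the multiprogramming level.
SELECT PROPERTY( 'CurrentMirrorBackgroundWorkers' ) AS current_mirror_workers,
       PROPERTY( 'MaxMirrorBackgroundWorkers' )     AS max_mirror_workers;
```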
================(Build #1501 - Engineering Case #736823)================ If a database was created with the -a or -af dbinit command line options set, or with the "ACCENT RESPECT" or "ACCENT FRENCH" clause on a CREATE DATABASE statement, and if the database CHAR charset was not a UCA collation, then catalog lookups for tables, procedures, etc., could have been slow because the server would not have taken advantage of the available indexes on the catalog tables. This problem has been fixed. As a side effect of the fix, dbunload now generates dbinit command lines or CREATE DATABASE statements that use fully explicit collation specifications such as "1252LATIN1(CaseSensitivity=Respect)" and it no longer puts -a, -af, ACCENT RESPECT, and ACCENT FRENCH on dbinit command lines or CREATE DATABASE statements. By using fully explicit collation specifications, dbunload also no longer puts "-c" or "CASE RESPECT/IGNORE" on dbinit command lines or CREATE DATABASE statements. ================(Build #1501 - Engineering Case #736526)================ Under very rare conditions, reading indexes containing compressed data could have led to a server crash with an ‘invalid memory read’ error. This has been fixed. ================(Build #1500 - Engineering Case #736575)================ In some cases, a statement that contained a dotted reference was misinterpreted, with the dotted reference interpreted as a row access. For example: select (dt.ci).nosuchcolumn from ( select cast(row(25 as val1,27) as row(val1 int, ci int)) ci ) dt would have incorrectly returned DT.ci.ci instead of an error. This has now been fixed. ================(Build #1500 - Engineering Case #736574)================ When using a statement that contained a ROWID() expression in a search condition, it was possible for the statement to fail with a nonfatal assertion failure. 
For example: select first 1 from T_Exprs where col_str >= CAST( ROWID(T_Exprs) AS LONG VARCHAR ) order by 'a' would have failed with the following error: Could not execute statement. Run time SQL error -- *** ERROR *** Assertion failed: 106105 Unexpected expression type dfe_FieldRID while compiling This has been fixed. ================(Build #1500 - Engineering Case #736573)================ When executing statements that included invalid uses of the NUMBER function, the statement could have failed with a non-fatal assertion failure. For example: select COUNT( DISTINCT NUMBER(*) ) from sys.dummy would have failed with the following error: *** ERROR *** Assertion failed: 106103 (16.0.0.1320)[asatest] NUMBER(*) is not associated This has been fixed so that an appropriate error is now returned. ================(Build #1500 - Engineering Case #736572)================ When converting a value of Row type with a field of Array type, an inappropriate error may have been given. For example: create or replace variable var_row row(col_intarray array(3) of int); set var_row = var_row; set var_row = var_row; The second SET statement would have returned the error: Expression is not an array SQLCODE=-1666, ODBC 3 State="HY000" This has been fixed. ================(Build #1500 - Engineering Case #736571)================ Certain SQL constructs containing selectivity estimates could have caused the server to crash. This has been fixed. ================(Build #1500 - Engineering Case #736562)================ When combining values where one was an array and the other was a NULL literal value, it was possible for the result to fail with an unexpected error. For example: create or replace variable var_strarray array(3) of long varchar; select cardinality( coalesce( var_strarray, null ) ); would have failed with the error: Could not execute statement. Expression is not an array SQLCODE=-1666, This has been fixed. 
================(Build #1500 - Engineering Case #736546)================ The LIKE predicate allows an optional ESCAPE argument which must be a single character. In some cases where the string and pattern could be optimized, an error was not returned for ESCAPE arguments that did not consist of a single character. For example: select if 'abc' like 'abc' escape '99' then 1 else 0 endif should have failed with an error, but did not. This has been fixed. ================(Build #1500 - Engineering Case #736544)================ When creating or altering a table, CHECK constraints were allowed that were not valid. These failed when evaluating the CHECK at execution time.
- in TABLE checks, unknown column/variable names were permitted if they started with '@'
- aggregate and window functions were permitted
- ROWID() and NUMBER() were permitted
- host variable references were permitted
This has been fixed. These now give appropriate errors when the CHECK is created. ================(Build #1486 - Engineering Case #735454)================ When reading an event trace (.etd) file, the Event Trace Data File Management utility (dbmanageetd) did not correctly decode the event severity associated with events in the log. The severity reported by dbmanageetd was incorrect, and filtering by severity level (-fl) would not work correctly. This problem has been fixed. Note that the files themselves are correct and only a new dbmanageetd is required to interpret them correctly. ================(Build #1486 - Engineering Case #735452)================ In a mirroring setup, if a copy node lost its connection to its parent, and MaxDisconnectedTime had been specified, it was possible for the server to noticeably exceed that time before shutting down. This has been fixed. The shutdown time should now be much closer to the MaxDisconnectedTime if the copy node is unable to re-establish a connection to its parent, alternate parent, or the primary. 
================(Build #1486 - Engineering Case #735344)================ After performing a calibration using the ALTER DATABASE CALIBRATE statement, it was possible for queries to execute slowly on the database due to an error in the recorded calibration data. This problem was most likely to happen with faster computers. This problem has been fixed. A workaround is to use “ALTER DATABASE RESTORE DEFAULT CALIBRATION” to remove the incorrect calibration data. ================(Build #1486 - Engineering Case #674545)================ If a function that can be inlined was invoked with an argument that was an expression with restrictions on where in the query it can appear (for example, an aggregate function), a syntax error could have been returned. This has been fixed. ================(Build #1485 - Engineering Case #735358)================ In rare situations in a high availability setup with TLS connections, it was possible for the primary server to hang while doing a commit. This is the same fix that was done for Engineering case 674782, but now available on Unix platforms. ================(Build #1485 - Engineering Case #735267)================ When using Snapshot isolation, a statement-level or transaction-level snapshot may have remained active while other transactions completed. Previously, the time to close the snapshot was proportional to O(N^2) for N transactions that completed with the snapshot open. With one hundred thousand transactions, this could have taken over a minute to close a single snapshot. With one million transactions, this could have taken over 100 days to close a single snapshot. During this time, other transactions were not allowed to start or stop. The server would have appeared to be fully busy on a single core. This performance has been improved; for one hundred thousand transactions, the new algorithm completes in 13 milliseconds (compared to 80205 milliseconds previously). 
Further, it was possible for the server to crash with specific access plans relating to viewing snapshot meta-data. This has also been fixed. A best practice is to ensure that the number of transactions tracked by the server is minimized; for example, by keeping the length of transaction snapshots short (commit as soon as possible). For statement-level snapshots, the snapshot is closed when the statement is closed. For cursors opened WITH HOLD (for example, using ODBC), the snapshot will not be closed when a COMMIT or ROLLBACK is performed; it is delayed until the statement is closed. Best practice recommends closing these cursors promptly. The sa_snapshots() procedure can be used to monitor active snapshots and sa_transactions() monitors transactions being tracked due to snapshots. ================(Build #1484 - Engineering Case #735226)================ In rare situations in a high availability mirroring setup, it was possible for the primary mirror server to hang while doing a commit if the connection to the partner server was lost. This has been fixed. ================(Build #1479 - Engineering Case #731211)================ A query of the form “select * from T, R where T.X IN (R.X, T.Y )” may have had a suboptimal execution plan if an index existed on the column T.X. This has been fixed. ================(Build #1477 - Engineering Case #734589)================ In a mirroring configuration, it was possible for the primary mirror server to restart sooner than expected when its partner was converted to a copy node. This has been fixed. ================(Build #1477 - Engineering Case #732453)================ If an application made a Java External Environment call to a Java method that made server side requests, then the Java External Environment may have hung when the Java method created or prepared a large number of server-side statements but did not explicitly close statements that were no longer needed. This problem has now been fixed. 
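The monitoring procedures mentioned for Engineering Case #735267 above can be invoked directly; a minimal sketch:

```sql
-- List active snapshots, then the transactions still being tracked
-- by the server on behalf of open snapshots.
SELECT * FROM sa_snapshots();
SELECT * FROM sa_transactions();
```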
================(Build #1474 - Engineering Case #734486)================ If the computer on which a server was running was improperly configured, it was possible for property(‘TcpipAddresses’), property(‘HttpAddresses’), or property(‘HttpsAddresses’) to return a string with multiple consecutive or trailing semicolon characters, e.g. “1.2.3.4;1.2.3.5;;1.2.3.6;;;1.2.3.7”. This has been fixed. ================(Build #1473 - Engineering Case #734158)================ Executing a batch or stored procedure that contained the ALTER DATABASE UPGRADE statement would very likely have crashed the server. This problem has now been fixed. Note that executing ALTER DATABASE UPGRADE within a batch or stored procedure is not recommended when using SQL Anywhere 16 and up, since the database will automatically be shut down once the upgrade completes. ================(Build #1472 - Engineering Case #726959)================ In a high availability mirroring setup, if the connection between the mirroring partners dropped, but the connections to the arbiter were stable, it was possible for the primary to have restarted. This has been fixed. ================(Build #1471 - Engineering Case #734042)================ Query plans containing a HashGroupBy operator could have under-performed in some cases. This was only possible when there were a large number of groups (~10,000 or more) and where the data types of the aggregate functions included strings, bit vectors, numerics, or other BLOBs. This has now been fixed. ================(Build #1471 - Engineering Case #732745)================ If a SELECT statement with an INTO clause contained a variable in the select list, then the temporary table was created with a not-nullable or nullable column definition depending on the value of the variable. This has been fixed. The column definition will now always be nullable in this context. 
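The behavior change for Engineering Case #732745 above can be sketched as follows (the variable and temporary table names are hypothetical):

```sql
-- After the fix, a variable in the select list of SELECT ... INTO
-- always produces a nullable column in the temporary table.
CREATE VARIABLE @v INT;
SET @v = 1;
SELECT @v AS c INTO #tmp;
-- Column c of #tmp is nullable even though @v currently holds a value.
```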
================(Build #1470 - Engineering Case #734049)================ If the trigger definition resulting from executing an ALTER TRIGGER statement conflicted with an existing trigger, the original trigger could have been deleted and an incorrect error returned. This has been fixed. This change also introduces a change in the algorithm used to decide the order of firing triggers when an ORDER clause is not specified, or multiple triggers with the same order and combination of events were created. For example, this may change the order in which triggers created with “UPDATE ORDER 2”, “UPDATE, DELETE ORDER 2” and “UPDATE OF <col> ORDER 2” are fired. Note that the documentation explicitly recommends specifying different ORDER values for triggers with the same event. If your database contains such sets of triggers, and the order of firing is important, please alter the triggers to reorder them accordingly. In general, both UPDATE ORDER 1 and UPDATE OF <col> ORDER 1 triggers will fire before any UPDATE … ORDER 2 triggers are fired. A unique ordering between UPDATE and UPDATE OF <col> triggers is still recommended. ================(Build #1462 - Engineering Case #733313)================ The mirror partner server in a mirroring setup may have failed to take over immediately as primary, and instead restarted, when the primary mirrored database became unavailable but the server was still running. This could have happened when the primary mirror server was shutting down, or if the “STOP DATABASE” statement had been used on the primary server. This has been fixed. ================(Build #1453 - Engineering Case #733181)================ When executing a statement with a parallel execution strategy, it was possible for the statement to fail to complete with an error such as the following: All threads are blocked [-307] ['40W06'] This problem was more likely to occur with a UNION query where multiple branches could use parallel execution. This problem has now been fixed. 
================(Build #1453 - Engineering Case #732727)================ Under rare circumstances, executing a stored procedure call could have crashed the server. This has been fixed. ================(Build #1452 - Engineering Case #732731)================ Issuing a CALL PROCEDURE statement from a client where the procedure accepted a ROW or ARRAY argument, but did not have one in its result set, could have failed, either with SQLCODE -1599 (Invalid use of collection type), or (on ODBC with smart describing enabled), by disconnecting the client. This has been fixed. A workaround is to put the procedure in the FROM clause of a SELECT statement, rather than call it immediately. ================(Build #1452 - Engineering Case #732730)================ Under some circumstances, the array concatenation operator could have failed to evaluate correctly. This has been fixed. ================(Build #1451 - Engineering Case #732583)================ The system procedure sp_parse_json() would not have accepted quoted strings containing the characters “,:{}”. This has been fixed; these characters are now accepted. Note that SQL Anywhere does not permit the characters ‘[‘ or ‘]’ inside identifiers, and so it follows that these will not be accepted in JSON quoted strings. Also, sp_parse_json() would have accepted bare name:value pairs that were not enclosed in an object. This has been fixed to match the JSON standard. For example, the following would not have given an error, but is now no longer accepted: call sp_parse_json('tvar','a:b'); The FOR JSON clause would have escaped the forward slash character (‘/’) in double quoted text. This has been fixed and forward slashes will no longer be escaped. ================(Build #1447 - Engineering Case #732247)================ In rare timing dependent cases, if the primary server was stopped, the mirror server could have failed to take over as the new primary server. This has been fixed. 
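The sp_parse_json() changes for Engineering Case #732583 above can be illustrated as follows (the variable name tvar is arbitrary):

```sql
-- Quoted strings containing the characters , : { } are now accepted:
CALL sp_parse_json( 'tvar', '{ "k1" : "a,b:{c}" }' );

-- A bare name:value pair outside an object now produces an error,
-- matching the JSON standard:
CALL sp_parse_json( 'tvar', 'a:b' );
```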
================(Build #1445 - Engineering Case #732156)================ Validating a database on a server with concurrent activity could have resulted in failed assertions, or a server crash. This has now been fixed. ================(Build #1445 - Engineering Case #730149)================ On rare occasions the server would have crashed on shutdown when running on Linux systems. The crash would have occurred when stopping shared memory connections. This problem has now been fixed. ================(Build #1439 - Engineering Case #731731)================ When running on Solaris systems, if the server had accepted a new connection, but the client side closed its socket right away, the TCP listener would have been stopped and the message "TCP Listener shutting down (130)" was displayed on the server console. This has been fixed. ================(Build #1437 - Engineering Case #731448)================ Execution of loops with a large number of iterations could have been slower in 16.0.0 than in 12.0.1. For the problem to have occurred, the loop condition, or statements executed in the loop, had to use variables. This has been fixed. ================(Build #1436 - Engineering Case #731334)================ If a SQL Anywhere 16 database was created with “dbinit -pd” or “CREATE DATABASE … SYSTEM PROCEDURE AS DEFINER ON”, or if an older database was upgraded using “dbupgrad -pd y” or “ALTER DATABASE UPGRADE … SYSTEM PROCEDURE AS DEFINER ON”, then attempting to perform a “FORWARD TO”, or make use of any of the sp_remote_... procedures, would have failed with an invalid userid or password error. This problem has now been fixed. A database upgrade will be required to apply this fix. 
Note that two possible workarounds are: 1) create an externlogin for dbo, or 2) set the new extern_login_credentials database option to “Login_user” ================(Build #1432 - Engineering Case #726536)================ If the START EXTERNAL ENVIRONMENT statement was used to start a connection-scoped external environment, and the connection later disconnected without having made any external environment calls to that environment, there was a small chance the server would have crashed. This problem has now been fixed. ================(Build #1431 - Engineering Case #730776)================ In very rare, timing dependent cases, it was possible for one or more copy nodes to have failed an assertion after the primary was shut down and the mirror took over as the primary. The assertion would have indicated a problem applying operations from the transaction log (for example, assertion 100903). This problem has now been corrected. ================(Build #1431 - Engineering Case #725391)================ If an Open Client or jConnect application attempted to perform a positioned update on a table that had an NCHAR based column in the primary key, then there was a chance the application would have hung. This problem has now been fixed. Note that this problem did not affect non-TDS based clients. ================(Build #1426 - Engineering Case #724507)================ In rare circumstances, a server that was running diagnostic tracing to a remote server may have crashed if the diagnostic tracing server or database stopped, or if the diagnostic tracing server or database stopped and tracing was detached at the same time. This has been fixed. ================(Build #1423 - Engineering Case #730111)================ Attempting to create a primary key on a table with an existing primary key could have returned an “Index name not unique” error, rather than an error reporting the existence of a primary key. This has been fixed. 
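The two workarounds listed for Engineering Case #731334 above could be applied roughly as follows (the remote server name and credentials are hypothetical placeholders):

```sql
-- Workaround 1: create an external login for dbo on the remote server.
CREATE EXTERNLOGIN dbo TO myremote REMOTE LOGIN remuser IDENTIFIED BY 'rempwd';

-- Workaround 2: have remote logins use the connected user's own credentials.
SET OPTION PUBLIC.extern_login_credentials = 'Login_user';
```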
================(Build #1422 - Engineering Case #729867)================ After calling the system procedure sa_server_option( ‘RequestTiming’, ... ), connections may have gathered or returned request timing values inconsistently. In particular, request timing may have been enabled when the connection was established but disabled immediately after changing the option, or request timing may have been disabled when the connection was established but ignored when the option was enabled. Also, if a pooled connection was reused, the values tracked by request timing were not reset as they would be if a new connection was established. This has been fixed so that request timing is enabled or disabled at connect time (including when reusing a pooled connection). Once the connection has been established, request timing will remain enabled or disabled for the connection until it is disconnected, regardless of sa_server_option( ‘RequestTiming’, ... ) calls during the life of the connection. In addition, if a pooled connection is reused when request timing is enabled, the values tracked by request timing are reset. Note that the database and server properties that correspond to those enabled by the -zt server option are only updated for connections that have request timing enabled or disabled at their individual connection time. ================(Build #1418 - Engineering Case #729661)================ If the host running the cloud primary had a large number of IPv4 and IPv6 addresses, then there was a chance that other cloud servers would have failed to start up because they could not connect to the cloud primary. This problem has now been fixed. ================(Build #1418 - Engineering Case #726235)================ Under rare, timing and execution plan dependent circumstances, execution of a parallel query plan could have caused the server to hang. This has been fixed. A workaround is to disable intra-query parallelism for affected queries. 
================(Build #1417 - Engineering Case #724355)================ In very rare circumstances, the server may have crashed while performing a query sort operation if the sort key was very long and the query was low on cache space. This has been fixed. ================(Build #1416 - Engineering Case #727669)================ Under rare circumstances, a simple statement using variables or builtin functions could have returned incorrect results. This could only happen if the simple statement was processed by bypassing the query optimizer. This has been fixed. ================(Build #1416 - Engineering Case #726396)================ If a SQL batch contained multiple DECLARE statements for local variables with assignments of the default value or an initial value, then the assignments were only executed for the last DECLARE variable-name statement of the batch. This has been fixed. ================(Build #1415 - Engineering Case #721976)================ In some cases, when a CHECK constraint was defined on a table, errors in the constraint were not detected when the constraint was created, even if they could have been detected at creation time. These errors were instead reported when the check constraint was evaluated. For example, the following CREATE TABLE would have succeeded but the subsequent INSERT would have failed with an error. create table T_ColumnCheck( x long varchar, check (x is distinct from COUNT( DISTINCT ( 1 ) )) ); insert into T_ColumnCheck(x) values(1) This has been fixed. Some errors cannot be detected when the constraint is created (for example, data exceptions). ================(Build #1412 - Engineering Case #729006)================ Creating a procedure with a right curly-bracket "}" in the procedure name (e.g. CREATE PROCEDURE “P1{}”() …) would have failed. This has been fixed. ================(Build #1403 - Engineering Case #728223)================ If an HTTP request was incorrectly formatted in a particular way, the server could have crashed. 
This has been fixed. ================(Build #1387 - Engineering Case #727601)================ If a statement contained an IN predicate on a column and one or more other sargable predicates on the column, then the statement might not have executed as efficiently as it could have. When optimizing predicates, the range of values within the IN list was not considered when finding tautologies, contradictions, or a narrower interval of validity. This has been fixed. For example, the following predicates are now optimized as follows (where UDF is a user-defined function):
x=3 and x in (1,2,3) --> x=3
x>=3 and x in (1,2,3) --> x=3
x>3 and x in (1,2,3) --> FALSE
x=2 and x in (1,2,UDF(3)) --> x=2
x=3 and x in (1,2,UDF(3)) --> x=3 and x in (1,2,UDF(3))
================(Build #1386 - Engineering Case #727315)================ Under rare circumstances, the server may have crashed if RememberLastStatement was turned on and a statement being executed was too complex. This has been fixed. ================(Build #1386 - Engineering Case #722646)================ When an application that was connected using Open Client or jConnect executed a query that involved parameters, and the query generated a syntax error, the server could have crashed. For the crash to occur, at least one of these parameters had to have been a string or binary parameter that was greater than 250 bytes in length, and an additional tinyint, smallint, int, or bigint parameter had to follow the string or binary parameter. This problem has now been fixed.

SQL Anywhere - Sybase Central Plug-in

================(Build #2313 - Engineering Case #800410)================ When unloading a subset of tables into a new database, the Unload Database wizard attempts to prevent selecting a table if doing so would cause the reload to fail. The wizard would have prevented selecting a table that contained a column with a domain data type. Selecting a table that contains a column with a domain data type is now only prevented if the domain is owned by a user other than SYS. ================(Build #2289 - Engineering Case #798718)================ Attempting to copy and paste column definitions in an unsaved table in the table editor could have caused SQL Central to crash. This has been fixed. ================(Build #2273 - Engineering Case #797608)================ Attempting to open the Set Primary Key wizard while a primary key constraint, foreign key constraint, unique constraint, table check constraint, or column check constraint was selected in the Constraints tab would have caused SQL Central to crash. This has been fixed. ================(Build #2265 - Engineering Case #797151)================ Copying and pasting, or dragging and dropping, an ARTICLE or TABLE onto a PUBLICATION could have caused SQL Central to crash. This has been fixed. ================(Build #2236 - Engineering Case #794675)================ If a user's only connection to a server was a connection to the utility database, then attempting to open the server property sheet would have failed with a permission denied error. Now the property sheet opens but only the General page is shown. ================(Build #2236 - Engineering Case #794673)================ If a server was running the utility database along with other databases, and a user was connected to the utility database only, then attempting to work with another database on the same server could have resulted in a permission denied error. Specifically, an error would occur if a database was selected in the tree or its property sheet was opened. 
This has been fixed. ================(Build #2236 - Engineering Case #794671)================ If attempting to connect to a database via a Connection Profile failed, then SQL Central could have crashed. This has been fixed. ================(Build #2165 - Engineering Case #787707)================ When editing numeric table values in Interactive SQL or SQL Central, the value typed could have been subject to unexpected rounding errors before the value was sent to the database. This problem would occur if the value could not be exactly represented as a 64-bit IEEE 754 floating point number. It has now been fixed. ================(Build #2161 - Engineering Case #787354)================ Sybase Central can generate documentation for objects in a SQL Anywhere database. After the files are generated, the user is asked if they want to view the resulting HTML files. On Mac OS X systems, electing to generate the HTML files into a directory whose path included a non-ASCII character would have caused the browser not to open, and Sybase Central would have reported an internal error. This has been fixed so that the browser now opens correctly. Note that the problem was limited to opening the web browser; the HTML files were generated without issue. ================(Build #2154 - Engineering Case #786565)================ It was not possible to set the server’s quitting time on the property sheet’s Options page if the timestamp_format option was set to a non-default value (the default is YYYY-MM-DD HH:NN:SS.SSS). This has been fixed. The property sheet now uses a free-form text field rather than a masked text field. Also, the current time is now shown in the same format as is required for setting the quitting time. ================(Build #2135 - Engineering Case #784579)================ If a breakpoint was deleted from the Breakpoints window when the breakpoint's stored procedure was not selected, the breakpoint was still shown when the procedure was subsequently selected. 
This has been fixed. ================(Build #2135 - Engineering Case #784559)================ In the Create Database wizard, when starting a new local server to create the database, the server name would have defaulted to the database file name. This could result in an invalid server name or a server name that wasn’t recommended; for example, if the database file name contained characters other than 7-bit ASCII. This has been fixed. Now if the database file name isn’t a valid or recommended server name, then the wizard generates a random server name. ================(Build #2130 - Engineering Case #783998)================ The Database Overview tab listed port numbers incorrectly for IPv6 addresses. This has been fixed. ================(Build #2130 - Engineering Case #783875)================ In the ‘Fragmentation’ tab of a database, selecting the ‘Tables’ folder in the tree, clicking the Back button in the toolbar to go back to the ‘Fragmentation’ tab, and then clicking ‘Checkpoint & Refresh’, would have selected the ‘Folders’ tab instead of staying on the ‘Fragmentation’ tab. This has been fixed. ================(Build #2121 - Engineering Case #783248)================ In the Table Editor, attempting to include a column with an approximate numeric data type (a float or double) as part of the table's primary key would show a dialog discouraging this practice; however, regardless of whether OK or Cancel was clicked in the dialog, the change would have been reverted and the primary key check box would remain unchecked. This has been fixed. Now the check box is checked if you click OK in the dialog. However, including a column with an approximate numeric data type as part of a table's primary key is still discouraged. This is because SQL Anywhere cannot enforce a referential integrity constraint for values that cannot be represented exactly by an approximate numeric data type. 
================(Build #2118 - Engineering Case #783071)================ In Sybase Central, trying to save a zero-length VARBINARY value from a table's "Data" tab would have caused Sybase Central to crash. This has been fixed. Note that the same problem could also have manifested itself in the Interactive SQL utility. ================(Build #2116 - Engineering Case #782826)================ Editing a SQL Anywhere connection profile that contained only a userid and password could have inadvertently set the "action" to "Connect with an ODBC data source" rather than "Connect to a running database on this computer". This has been fixed. ================(Build #2111 - Engineering Case #782250)================ Rows can be deleted from a table in Sybase Central even if the table does not include a primary key. If deleting a row actually causes more than one row to be deleted, Sybase Central displays a message which prompts the user to refresh the display to get an accurate view of the rows. If the display was not refreshed, but instead the last row in the table was selected and the Delete key was pressed, Sybase Central would have crashed while attempting to delete the row. This has been fixed; pressing the Delete key in this case now does nothing. ================(Build #2106 - Engineering Case #781759)================ The "Compare Database Schemas" window has an "Open in Interactive SQL" button which is supposed to open Interactive SQL and put the SQL script into its "SQL Statements" pane. On Mac OS X computers, Interactive SQL would also have immediately executed the statements. Now, it does not execute the statements unless the user tells Interactive SQL to run them. ================(Build #2093 - Engineering Case #780457)================ When comparing databases on Mac OS X systems, an unknown error would have occurred if either of the databases contained a Java class, JAR file, or external environment object. This has been fixed.
================(Build #2084 - Engineering Case #779251)================ After running the Validate Database wizard, Sybase Central’s connection would have held a schema lock on each of the tables and materialized views in the database. This has been fixed. ================(Build #2076 - Engineering Case #778458)================ Creating a remote server from within the Migrate Database wizard could have crashed if the database server had previously been selected in the tree and View -> Refresh had been selected. This has been fixed. ================(Build #2076 - Engineering Case #778449)================ If the Tasks list was showing in the left pane, clicking a task item would have caused Sybase Central to crash if the Data tab was showing and one or more rows were selected in the Data tab’s table. This has been fixed. ================(Build #2075 - Engineering Case #778293)================ Selecting a database’s Overview tab would have caused an error if the database used the 1254TRK collation. This has been fixed. ================(Build #2074 - Engineering Case #777960)================ Sybase Central could have crashed if table data was edited and the table did not have a primary key, and if the row being modified had the same values as some other row in the table. This has been fixed. ================(Build #2018 - Engineering Case #771379)================ The web server port was not displayed in the Overview tab. This has been fixed. ================(Build #2018 - Engineering Case #771364)================ When searching in SQL Central, if the server was shut down or the connection to the database was dropped from another application, then SQL Central would have shown multiple error dialogs, each of which needed to be closed before SQL Central could be used again. This has been fixed.
================(Build #2015 - Engineering Case #770952)================ The following issues related to the Database Documentation wizard have been fixed: On Windows systems, Sybase Central could have become unresponsive for about a minute when viewing the generated documentation if the directory was specified from the root of the drive but did not include the drive letter. On non-Windows systems, generating the documentation to a directory with a space in its name, and then attempting to open the documentation, would have failed. ================(Build #2015 - Engineering Case #770843)================ When comparing databases, SQL Central would have appeared to hang if the database contained a large procedure definition. This has been fixed. ================(Build #2000 - Engineering Case #768513)================ When defining MobiLink server command lines, two of the available options are -sl java and -sl dnet. These are used to pass startup parameters to the Java VM or .NET CLR used to process Java or .NET scripts. The values entered for these options in the MobiLink Server Command Line Properties dialog were automatically enclosed in quotes when generating a MobiLink server command line (e.g. -sl java "-c c:\myjava"). This was incorrect; the MobiLink server expects these option values to be enclosed in brackets or braces. This has been fixed so that if the value entered is already surrounded by brackets or braces, nothing will be added to it; otherwise the value will be enclosed in brackets (e.g. -sl java(-c c:\myjava)). ================(Build #1993 - Engineering Case #768875)================ Attempting to create a maintenance plan could have failed if the Allow_nulls_by_default database option was set to ‘Off’ the first time the Maintenance Plans folder was selected for that database in Sybase Central. This has been fixed.
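The bracket convention described in case #768513 can be sketched as follows (illustrative Python; the function name and exact validation are assumptions, not the plug-in's actual code):

```python
def format_sl_value(value):
    """Sketch of the fixed behavior: a -sl java / -sl dnet value must be
    enclosed in brackets or braces, not quotes; leave it alone if the
    user already supplied brackets or braces."""
    v = value.strip()
    if (v.startswith("(") and v.endswith(")")) or \
       (v.startswith("{") and v.endswith("}")):
        return v          # already bracketed/braced: add nothing
    return "(" + v + ")"  # otherwise enclose in brackets

print(format_sl_value(r"-c c:\myjava"))    # (-c c:\myjava)
print(format_sl_value(r"(-c c:\myjava)"))  # (-c c:\myjava)
```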
================(Build #1993 - Engineering Case #768652)================ Sybase Central would have failed assertions, or behaved incorrectly, when running on a machine with a Turkish locale, regardless of the database’s collation or the server machine’s locale. This has been fixed. ================(Build #1896 - Engineering Case #762532)================ The Overview page did not show the arbiter and mirror server names. This has been corrected. ================(Build #1894 - Engineering Case #762605)================ The "Find/Replace" window for the syntax highlighting editor could have hung when searching upward for a whole word if the search text appeared in the editor, but not as a separate word, and if the caret position was initially after the search text. This has been fixed. This problem also affected the Interactive SQL utility, and had been present since version 9.0.2, possibly earlier. ================(Build #1886 - Engineering Case #762171)================ When hovering between column headings in the right pane, the cursor may not have been drawn as a resize cursor. This has been fixed. ================(Build #1885 - Engineering Case #762046)================ The SQL Anywhere Plug-in tooltip for the Connect window had its text truncated. This has been fixed. ================(Build #1885 - Engineering Case #762038)================ When reviewing recommendations from a tracing session, and the database had been shut down, a java.lang.IndexOutOfBoundsException could have occurred. This has been fixed. ================(Build #1845 - Engineering Case #759373)================ In the Views folder, clicking the “Last Refresh Time” or “Known Stale Time” column headings to sort the views by one of these columns would have produced incorrect results. This has been fixed.
================(Build #1838 - Engineering Case #758812)================ The property sheet for a table column would not have allowed changing the column’s “Compress values” or “Maintain BLOB indexes for large values” settings if the column’s type was a domain, even if the domain’s base type supported these settings. This has been fixed. ================(Build #1827 - Engineering Case #757937)================ Attempting to save the SQL for an event that contained whitespace before the BEGIN keyword would have resulted in a syntax error. This has been fixed. ================(Build #1815 - Engineering Case #756707)================ Sybase Central could have crashed if it was running and something was done to change the Windows desktop theme. This same problem could have been encountered when using a Remote Desktop Connection with a low-bandwidth connection; in that configuration, Remote Desktop may automatically change the desktop theme to satisfy the bandwidth limitations. This issue was partially fixed by the changes for Engineering case 752610, but this change goes beyond it by fixing the following problems as well: - On the Search panel, the "Search" button enabling logic and the button's clicking logic would stop working after the look-and-feel changed. - Sybase Central could have crashed if any of the following panels/windows were open when the look-and-feel changed: 1. "Results" on the "Search" pane 2. The list of plug-ins in the "Sybase Central Plug-ins" window 3. The connection profiles window 4. The About window 5. The Disconnect window - The background color of items in details panels would have been painted gray (rather than white) after switching look-and-feel. - Drop-down toolbar buttons (e.g. "Tools") did not draw their arrow, and did not open when clicked. ================(Build #1762 - Engineering Case #753783)================ Sybase Central could not re-enable a secured feature for a version 12 database running on a version 16 server.
Attempting to do so would have resulted in a “Procedure 'sp_use_secure_feature_key' not found” error. This has been fixed. ================(Build #1760 - Engineering Case #753782)================ Opening the property sheet for an index on a materialized view could have crashed Sybase Central. This has been fixed. ================(Build #1755 - Engineering Case #752610)================ Sybase Central could have crashed if it was running and something was done to change the Windows desktop theme. This same problem could also have been encountered when using a Remote Desktop Connection with a low-bandwidth connection; in this configuration, Remote Desktop may automatically change the desktop theme to satisfy the bandwidth limitations. This problem has now been fixed. ================(Build #1740 - Engineering Case #751566)================ The antialiasing algorithm used for SQL editors (including the SQL Anywhere auditing viewer, and the MobiLink server log file viewer) did not render certain fonts optimally. Chinese fonts on RedFlag Linux and small fonts generally were poorly drawn (or were not drawn at all). This has been fixed. ================(Build #1683 - Engineering Case #747714)================ Trying to save the Application Profiling recommendations could have failed. This has been fixed. ================(Build #1623 - Engineering Case #744044)================ When testing a connection to a MySQL remote server, Sybase Central could sometimes have reported that the connection failed, when in fact it had succeeded. This has been fixed. ================(Build #1614 - Engineering Case #744039)================ It was not possible to connect to a tracing database when profiling using a client-only install. This has been fixed. Now, only opening an analysis file is prevented in a client-only install. 
================(Build #1608 - Engineering Case #743684)================ The Extract Database wizard would have reported that the SELECT ANY TABLE system privilege was required, when in fact the SYS_REPLICATION_ADMIN_ROLE role is what is required. This has been fixed. ================(Build #1607 - Engineering Case #743587)================ When duplicating a user via copy-and-paste or drag-and-drop, the password for the new user was copied from the original user. Now, Sybase Central prompts for the new user’s password. ================(Build #1606 - Engineering Case #743243)================ Users could not use Sybase Central to view the contents of a view owned by SYS unless they had exercise rights on the SELECT ANY TABLE system privilege. This has now been corrected. ================(Build #1585 - Engineering Case #735368)================ Selecting and deleting multiple table privileges could have crashed Sybase Central if there were corresponding column privileges. This has been fixed. ================(Build #1559 - Engineering Case #740488)================ Attempting to create a table column or domain with an empty string as the default value would have caused the object to be created with no default value. This has been fixed. ================(Build #1538 - Engineering Case #739171)================ Sybase Central could have become unresponsive for a few moments when a stored procedure was saved, if the text completer was open. The problem occurred if the start of the ALTER PROCEDURE statement was modified (perhaps by removing the quotation marks from the owner or procedure name) before clicking the "Save" toolbar button. This has been fixed. ================(Build #1530 - Engineering Case #738707)================ When a mirror or copy node was transitioning between pulling log pages and having the primary or parent push log pages, it was possible for the mirror or copy node to have failed assertions 112011 or 100927. This has been fixed.
================(Build #1523 - Engineering Case #738312)================ When comparing databases, an error could have occurred if either database contained any of the following: 1. A table column with a default value containing nested parentheses 2. A table column with a computed value spanning multiple lines and/or containing nested parentheses 3. A table or column check constraint definition spanning multiple lines and/or containing nested parentheses These issues have been fixed. ================(Build #1523 - Engineering Case #738302)================ When comparing databases, Sybase Central could have run out of memory if the source for a procedure, function, view, materialized view or trigger contained a large comment within the “CREATE <object-type> <object-owner>.<object-name>” prefix. This has been fixed. ================(Build #1513 - Engineering Case #737533)================ When a secured feature exception occurred, and Sybase Central prompted for a secure feature key, it asked for an authorization key only. This has been corrected so that it now prompts for both the key name and its authorization key. ================(Build #1498 - Engineering Case #736356)================ Sybase Central could have crashed after changing the definition of a view or materialized view. This has been fixed. ================(Build #1489 - Engineering Case #735605)================ Opening the Text Configuration Objects folder or the Create Text Index wizard could have resulted in the error “Permission denied: you do not have permission to change remarks for "default_char"”. This would only have occurred if the default text configuration objects (SYS.default_char and SYS.default_nchar) didn’t already exist in the database and the user did not have permission to set a comment on a text configuration object. This has been fixed. 
================(Build #1488 - Engineering Case #735599)================ When duplicating a materialized view, any change to the dbspace in which it was located would have been ignored. Now the choice of dbspace is no longer given, and the copied materialized view is created in the same dbspace as the original. ================(Build #1488 - Engineering Case #735598)================ When using the wizard to create a table, and selecting a dbspace that the user didn’t have permissions on, the error wasn’t reported until attempting to save the table, at which point the dbspace couldn't be changed because the wizard had already closed. This has been fixed. Now the error is reported immediately when choosing the dbspace in the wizard. Similarly, when using a wizard to create a materialized view, index or text index, if a dbspace was selected that the user didn’t have permissions on, then the error wasn’t reported until the Finish button was clicked. This has also been fixed; the error is now reported immediately when choosing the dbspace in the wizard. ================(Build #1488 - Engineering Case #735595)================ Attempting to connect to a database running on a version 7 or earlier server (for the purposes of unloading/reloading the database into a new version 16 database) would have caused Sybase Central to crash. This has been fixed. ================(Build #1485 - Engineering Case #735367)================ If an application used variables in the USING clause of a remote server, or the AT clause of a proxy table or procedure, then the server would have leaked memory. This problem has now been fixed. ================(Build #1484 - Engineering Case #735254)================ Selecting View -> Refresh Folder or View -> Refresh All while viewing object privileges would have caused Sybase Central to crash if a row was selected in the object privilege editor. This has been fixed. Note that the problem did not occur if instead the F5 key was used to perform the refresh.
================(Build #1484 - Engineering Case #735151)================ Attempting to revoke all object privileges on a given table or view, from a given user or role, could have resulted in a “permission denied” error, even when the user did in fact have permission to revoke the granted privileges. This has been fixed. ================(Build #1480 - Engineering Case #734989)================ In a version 12 or later database, attempting to create a Synchronization Subscription by dragging a publication and dropping it on a MobiLink user (or vice versa), or copying a publication and pasting it to a MobiLink user (or vice versa), would have caused Sybase Central to report an error while attempting to create the subscription. This has been fixed. ================(Build #1478 - Engineering Case #734626)================ Attempting to switch between Design, Debug or Application Profiling modes while editing a row in the Data tab for a table or view, would have caused Sybase Central to crash. This has been fixed. ================(Build #1478 - Engineering Case #731386)================ When comparing databases, if the parser encountered a COMMENT ON statement for which the corresponding CREATE <object-type> statement could not be found, then an “Unknown error” would have been reported. Now the COMMENT ON statement is reported in the SQL Scripts tab as an unhandled statement. ================(Build #1477 - Engineering Case #734591)================ When using the Foreign Key Wizard to create a foreign key, and choosing to add one or more columns to the foreign table for a foreign key that allowed nulls, the foreign key would actually have prohibited nulls if the foreign table was empty. This has been fixed. In addition, the Foreign Key wizard did not display the SQL to create the columns on the last page of the wizard. This has also been fixed. 
================(Build #1477 - Engineering Case #734580)================ The Schedules tab for an event could have shown out-of-date information after a schedule was modified, for example, via its property sheet. This has been fixed. ================(Build #1443 - Engineering Case #729777)================ The "Overview" panel for a database shows the mirrored state of a database. If the deprecated command line option "-xp{partner=...;arbiter=...}" was specified, the mirroring configuration was not shown on the "Overview" panel. This has been corrected so that now it is. ================(Build #1437 - Engineering Case #731504)================ When comparing databases, if the source or definition for a procedure, function, view, materialized view, trigger or event contained a multi-line comment with a line that contained only the text “go”, then Sybase Central would have reported that it had encountered an unhandled statement. This has been fixed. ================(Build #1432 - Engineering Case #731021)================ On the Grantees, Roles or System Privileges tabs for a user, role or system privilege, the Grantor was not shown until the changes were saved to the database. This has been fixed. ================(Build #1432 - Engineering Case #731017)================ On the Grantees, Roles or System Privileges tabs for a user, role or system privilege, if the New Grantees/Granted Roles/Granted System Privileges dialog was opened and an object was selected for which there was already a row in the privilege editor, then no privileges would have been granted. This has been fixed. ================(Build #1432 - Engineering Case #727676)================ The MobiLink Log File Viewer in Sybase Central was unable to read log files that contained lines longer than 8192 bytes. Now, lines up to 65536 bytes are supported. Note that lines in log files can become very long when the MobiLink server "-vr" option (display column values) is used.
================(Build #1431 - Engineering Case #730907)================ When selecting the Data tab for a table or view, clicking Cancel in the “Loading Data” dialog and then attempting to fetch the data didn't always cancel the loading. This has now been corrected. ================(Build #1431 - Engineering Case #730896)================ On the Fragmentation tab for a database, selecting a table or index in the list and then attempting to change the selection while the previous selection’s bitmap was being loaded, may have caused Sybase Central to hang until the loading completed. This has been fixed. Now the loading of the previous selection’s bitmap is canceled and the loading of the new selection’s bitmap is started. ================(Build #1431 - Engineering Case #730890)================ When displaying the last state change time for LDAP servers, the values displayed in the LDAP Servers folder did not include a time zone, while the values displayed on the LDAP server property sheet included the server’s time zone, which was incorrect. Now the values clearly indicate that LDAP server last state change times are in Coordinated Universal Time (UTC). ================(Build #1430 - Engineering Case #730893)================ On the Create Database wizard’s “Connect to the Database” page, the server name shown would have been incorrect if a new local server was started, then the database creation was cancelled and the database file name was changed. This has been fixed. ================(Build #1430 - Engineering Case #730762)================ In the Breakpoints window, selecting an existing breakpoint to edit may not have selected the right server name. This has been fixed. ================(Build #1426 - Engineering Case #730475)================ When viewing a NULL binary value in the Long Value window, the Save button was incorrectly enabled. If the Save button was clicked, Interactive SQL would have crashed. This has been fixed.
Now, the button is not enabled if the value is NULL. ================(Build #1423 - Engineering Case #730250)================ Attempting to delete one or more objects by selecting the objects, pressing the Delete key, and then quickly pressing the Y key to confirm the deletion before the confirm dialog was displayed, could have caused Sybase Central to crash. This has been fixed. ================(Build #1423 - Engineering Case #730248)================ When using the table editor to change a column’s data type, Sybase Central could have crashed if a domain had previously been created in the same Sybase Central session. This has been fixed. ================(Build #1418 - Engineering Case #730251)================ The Upgrade Database wizard could not be used to upgrade a database unless the user was granted exercise rights on the SERVER OPERATOR system privilege, even though the wizard didn’t make use of this system privilege. Now the wizard no longer requires the SERVER OPERATOR system privilege. ================(Build #1389 - Engineering Case #727658)================ The Deploy wizard in the MobiLink Plug-in generates a summary file that shows all the choices made during the deploy process. Each line in this file was terminated with a \n character. Some Windows editors do not recognize this as a line terminator; they expect \r\n. The wizard now terminates the lines of the summary file with the appropriate line terminator for the operating system on which it is running. ================(Build #1382 - Engineering Case #727045)================ If the case of the procedure owner and/or name differed between the procedure’s definition statement and its source statement in the reload script generated by dbunload, then attempting to compare the databases would have thrown an assertion.
For example:

create procedure usr.FullName( ...;
COMMENT TO PRESERVE FORMAT ON PROCEDURE "USR"."FullName" IS ...;

The same problem would have occurred for functions, triggers, views, and events. These issues have been fixed.
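The owner/name mismatch in the entry above comes down to comparing identifiers case-insensitively while ignoring optional double quotes; a minimal sketch of that matching rule (assumed logic, in illustrative Python rather than the plug-in's actual code):

```python
def canonical(identifier):
    # Strip optional double quotes and fold case so that
    # usr.FullName and "USR"."FullName" name the same object.
    return identifier.strip('"').lower()

def same_object(owner_a, name_a, owner_b, name_b):
    return (canonical(owner_a), canonical(name_a)) == \
           (canonical(owner_b), canonical(name_b))

print(same_object('usr', 'FullName', '"USR"', '"FullName"'))  # True
```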

SQL Anywhere - Utilities

================(Build #2741 - Engineering Case #816795)================ If any of the log scanning tools (dbmlsync, dbremote, dbtran) had to scan a large portion of the transaction log (roughly 1GB, but this varies slightly by tool) and reached the maximum cache size that could be kept in memory, the log scanning code would still spend significant effort attempting to grow the cache, only to discover that no additional cache could be allocated. The algorithm is now more efficient when the maximum cache size has been reached. ================(Build #2619 - Engineering Case #812412)================ The Transaction Log utility (dblog) failed to check whether the database version was newer than it could handle. It uses the DBChangeLogName dbtools library function to update the database log file entry. For example, the version 16 DBChangeLogName function will attempt to operate on a version 17 database, possibly with erroneous results. Also, the version 16 DBCreatedVersion function reports "16" for a version 17 database; it fails to determine that the database is actually newer. These problems have been fixed. Now DBCreatedVersion will return VERSION_UNKNOWN in the created_version field for a database store format that is newer than the version of the library function code. Possible return values for version 17 DBCreatedVersion include VERSION_UNKNOWN, VERSION_17, VERSION_16, VERSION_12, and so on. Possible return values for version 16 DBCreatedVersion include VERSION_UNKNOWN, VERSION_16, VERSION_12, and so on. These constants are documented in the dbtools.h header file. The Transaction Log utility (dblog) will now return an error message for a store format that is newer than it can handle. In such cases, it will return a message similar to the following:

dblog -t newlog test17.db
SQL Anywhere Transaction Log Utility Version 16.0.0.2618
Unable to open database file "test17.db" - test17.db was created by a different version of the software.
You must rebuild this database to use it with this version of SQL Anywhere.

In the example, dblog is from version 16 and the database was created with or upgraded to version 17. ================(Build #2338 - Engineering Case #800153)================ The dbisql tool did not return table constraint information for DESCRIBE tablename in case-sensitive databases if the table name was specified in a different case. This has been fixed. ================(Build #2337 - Engineering Case #801963)================ On Windows systems, a custom SQL Anywhere install using the MSI file created by the Deployment Wizard would have failed to create some registry entries. In particular, the following entries were not set for the indicated versions and bitness:

Version 16.0, 64-bit: HKEY_LOCAL_MACHINE\SOFTWARE\SAP\SQL Anywhere\16.0
Version 17.0, 64-bit: HKEY_LOCAL_MACHINE\SOFTWARE\SAP\SQL Anywhere\17.0

When one of the above registry entries was missing, it may have led to problems locating the correct version of software components. The procedure for creating these entries manually is described here: http://dcx.sap.com/index.html#sqla170/en/html/815fecef6ce21014b8cbe79cfc3ef3a3.html

The following event log registry entries were also not set:

32-bit: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY <version>.0
32-bit: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY <version>.0 Admin
64-bit: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY64 <version>.0
64-bit: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\SQLANY64 <version>.0 Admin

where <version> is 12, 16, or 17. When the above registry entries are missing, it is not possible to see the message text when using the Windows Event Viewer to examine event log entries created by the database server and other SQL Anywhere components.
The procedure for creating these entries manually is documented here: http://dcx.sap.com/index.html#sqla170/en/html/816136106ce21014b9a68de8836cc659.html These problems have now been fixed. ================(Build #2303 - Engineering Case #799782)================ If the Unload utility was run with the -ar option ("rebuild and replace database") when attempting to rebuild an encrypted database from a previous version of SQL Anywhere that had been involved in replication or synchronization, the process could have failed with the error: Unable to open database file "C:\full\path\cons.db" - - C:\full\path\cons.db no database specified even though the database existed at "C:\full\path\cons.db". This has now been fixed. ================(Build #2262 - Engineering Case #796852)================ In the Interactive SQL utility, the SQL editor can show procedure, function, and (spatial) method prototypes in a tooltip. When an opening parenthesis is typed, the editor communicates with the database to see if the text to the left of the parenthesis is a procedure so that it can compose the prototype for the tooltip. The editor is unresponsive for the time needed for that database check, so for slow databases the editor could have hung for a couple of seconds when an opening parenthesis was typed, which made it unusable. The editor configuration dialog has a checkbox, "Show tool tips". The SQL editor would have performed the database check even if this box was cleared. Now, the database check is skipped if the box is cleared. See KBA 2306369: https://service.sap.com/sap/support/notes/2306369 ================(Build #2238 - Engineering Case #794961)================ The text completer could have mistaken the keyword "ON" following a table name as a table alias. If the completer was used to fill in the name of a column in that table, the column name would have been prefixed by "on.", which was incorrect. This has been fixed.
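The text-completer problem in case #794961 is a classic alias-detection pitfall: an identifier following a table name is an alias only if it is not a reserved keyword. A minimal sketch of that rule (the keyword list and function are illustrative assumptions, not the utility's actual code):

```python
SQL_KEYWORDS = {"on", "where", "join", "inner", "left", "right",
                "full", "cross", "group", "order", "having"}

def alias_after(table_name, next_token):
    # "FROM Employees ON ..." -> ON is a keyword, so no alias;
    # "FROM Employees e ..."  -> e is the alias for Employees.
    if next_token and next_token.lower() not in SQL_KEYWORDS:
        return next_token
    return None

print(alias_after("Employees", "ON"))  # None
print(alias_after("Employees", "e"))   # e
```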
================(Build #2238 - Engineering Case #794889)================ The Connect window allows connecting to a SQL Anywhere database using a connection string, and contains a list of recently used connection strings. Passwords were not removed from the connection strings before they were saved to this list. This has been corrected so that now they are. ================(Build #2231 - Engineering Case #794821)================ If the Interactive SQL utility (dbisql) was run as a command line program from an SSH shell (or similar), and the SSH connection was closed, it was possible for dbisql to then consume 100% of the CPU. This has been fixed. ================(Build #2213 - Engineering Case #803151)================ The Log Translation utility (dbtran) may have crashed if the option to include audit records in the output was specified. This has been fixed. ================(Build #2204 - Engineering Case #791975)================ A review of Java diagnostics revealed an incorrect coding practice that could have caused the Interactive SQL utility to become unresponsive under rare circumstances. This has been fixed. ================(Build #2188 - Engineering Case #790326)================ For builds 16.0.0.2090 to 16.0.0.2187, the "Check For Updates" feature was failing. This has now been corrected. ================(Build #2179 - Engineering Case #789262)================ If multiple rows were selected in a result table, pressing the Delete key or clicking the "Delete Row" context menu item deleted only the first selected row, rather than all of the selected rows. This has been fixed so that all of the selected rows are now deleted. ================(Build #2175 - Engineering Case #788698)================ Changes for Engineering case 768658 prevented the Interactive SQL utility from committing on shutdown (or when disconnecting) when connected to any type of database other than SQL Anywhere or SAP IQ. This has been fixed.
================(Build #2167 - Engineering Case #787888)================ Adding a row to a table from the "Results" panel with a UNIQUEIDENTIFIER column would have caused Interactive SQL to crash. This has been fixed. This problem only affected new rows added from the scrolling table component in the "Results" panel. Editing an existing row was fine, as was executing an explicit INSERT statement. This bug also affected the "Data" tab for tables in SQL Central. ================(Build #2163 - Engineering Case #787530)================ An XML value can be viewed from a result set in its own window. That window contains a tab called "XML" which contains a "Format" button. Clicking the button formats the XML to make it more readable. If the column value included a self-closing element which contained whitespace within the tag, it was not recognized as a self-closing element, and all subsequent indenting was wrong. For example, "<e><e /></e><f>Test</f>" should be formatted as:
<e>
   <e />
</e>
<f>Test</f>
but was incorrectly formatted as:
<e>
   <e /></e>
<f>Test</f>
This has been fixed. ================(Build #2149 - Engineering Case #786259)================ XML values can be displayed in their own window by double-clicking them. That window contains a number of tabs, one of which is "XML Outline", which renders the XML value as a tree. On non-Windows computers, clicking on an expandable node in the tree could have expanded the wrong node, or could have done nothing. This has been fixed. ================(Build #2149 - Engineering Case #786243)================ It was possible for the Interactive SQL utility to have reported an out-of-memory error in the Import wizard when importing data which contained very long column values. This has been fixed.
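The self-closing-element indentation described in case 787530 above can be illustrated with a standard XML pretty-printer. This sketch uses Python's xml.dom.minidom rather than DBISQL's own formatter, so the exact output differs, but it shows the expected behavior: the nested empty element is indented one level deeper than its parent, and the closing tag returns to the parent's level.

```python
# Illustrative only: Python's minidom, not DBISQL's formatter.
# A wrapping <r> root element is added because a well-formed XML
# document needs a single root.
from xml.dom import minidom

doc = minidom.parseString("<r><e><e/></e><f>Test</f></r>")
pretty = doc.toprettyxml(indent="   ")
print(pretty)
```

Each level of nesting adds one unit of indentation, and the self-closing `<e/>` is treated as a complete element rather than an unclosed tag.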
================(Build #2138 - Engineering Case #784929)================ If the Broadcast Repeater utility was run with the -x option to stop an existing dbns, the first dbns would shut down, but the second dbns would remain running. This has been fixed. ================(Build #2136 - Engineering Case #784723)================ The Interactive SQL utility could have reported an internal error if the Query Editor was opened after losing the connection to a SQL Anywhere database. This has been fixed. ================(Build #2136 - Engineering Case #784649)================ When exporting data to an ASE database, the "Owner" combobox on the Export Wizard page where a table name is specified could have contained a given owner name many times. This has been corrected so that the name now appears only once. ================(Build #2115 - Engineering Case #782602)================ The dbisqlc utility returned an incorrect result row with null values if a query contained an outer join and the null-supplying side was a procedure call that returned no rows. This has been fixed. ================(Build #2112 - Engineering Case #782428)================ The Interactive SQL utility could have crashed with a NullPointerException when driven by Squish, a UI testing program. This exception has not been seen without Squish. This has now been fixed. ================(Build #2111 - Engineering Case #782242)================ On non-Windows computers with multiple monitors, popup menus (typically opened by right-clicking on a component) could have appeared on the wrong monitor if the Interactive SQL utility or SQL Central window was not on the primary monitor. This has been fixed. ================(Build #2108 - Engineering Case #781917)================ When editing null table values with the Interactive SQL utility on Mac OS X, the first character typed was unexpectedly selected, meaning the second character typed replaced the first character.
This has been fixed so that the first character is no longer selected. ================(Build #2106 - Engineering Case #777179)================ When using the SQL Anywhere, MobiLink, or UltraLite utilities with multi-byte characters in connection strings or on the command line, there may have been an unexpected "Parse Error". This has been fixed. ================(Build #2104 - Engineering Case #781693)================ The version number on X.509 certificates generated by createcert was 1, but the certificates contained extensions that are only available in version 3. One notable side-effect of this was that importing certificates into a Java keystore for the OData server or other Java applications failed. This was a result of the conversion from Certicom to OpenSSL, and has been fixed. ================(Build #2102 - Engineering Case #781475)================ A CONNECT USING statement could have failed to connect to a cloud (SQL Anywhere on-demand edition) database if the user was already connected to the database and the connection string did not include a server name. This has been fixed. ================(Build #2100 - Engineering Case #781340)================ Under rare circumstances, an incremental backup executed on the client could have incorrectly reported success while having copied an incomplete transaction log. This has been fixed. A way to check whether the problem has occurred is to validate the backup. The file sizes of the database transaction log and the backup log can also be compared. ================(Build #2098 - Engineering Case #780898)================ The OUTPUT statement could have written results to the wrong file if all of the following were true: - DBISQL was configured to show results from all statements. That was an option (but not the default) in version 16 and earlier. - The last two (or more) result-set-generating SQL statements were the same. - DBISQL was run as a windowed application.
For example:
create table t ( c int );
insert into t values( 1 );
select * from t;
output to 'x.txt';
insert into t values( 2 );
select * from t; // Same as the previous SELECT statement
output to 'x.txt' append;
Before the fix, the first OUTPUT statement wrote 'x.txt', while the second OUTPUT statement wrote 'x-1.txt' and 'x-2.txt'. The file 'x-1.txt' was a copy of 'x.txt'. Now, the second OUTPUT correctly appends to 'x.txt'. ================(Build #2096 - Engineering Case #780701)================ The Import Wizard reported an error when attempting to import a shape file into a database that used the Turkish collation 1254TRK. This has been fixed. ================(Build #2094 - Engineering Case #780471)================ The Plan Viewer would have failed to get the plan for a statement if the statement contained a literal string which contained a semicolon. This has been fixed. ================(Build #2084 - Engineering Case #779247)================ The Unload utility (dbunload) could have crashed or reported an error when attempting to unload a database with a Turkish charset. This problem has now been fixed. ================(Build #2078 - Engineering Case #778719)================ When connected to a database with a Turkish collation (1254TRK), the Index Consultant would have always failed with the message "Table 'sysphysidx' not found". This has been corrected. ================(Build #2078 - Engineering Case #778718)================ When connecting to a database with a Turkish collation (1254TRK), the post login procedure was not being called, even when one was defined. This has been corrected. ================(Build #2075 - Engineering Case #778231)================ The Extraction utility (dbxtract) was incorrectly extracting the consolidated database's mirror server definitions (if they existed). This has been fixed so that mirror server definitions are no longer extracted.
================(Build #2066 - Engineering Case #777043)================ If the Backup utility (dbbackup) was used to back up a database that had no transaction log, and the -n and -r switches were used, dbbackup would crash. This has been fixed. ================(Build #2065 - Engineering Case #776897)================ The Interactive SQL utility (dbisql) could have reported an internal error (OutOfMemoryException) when copying large result sets while the results were displayed as text. Now, a more user-friendly error message is displayed. If dbisql runs out of memory when copying results, there are a couple of things that can be done:
1. When running the 32-bit version of dbisql, run the 64-bit version instead. It allows for a larger heap, and is less likely to run out of memory.
2. Export the data using an OUTPUT statement, or the "Export Wizard", rather than copying results to the clipboard.
================(Build #2056 - Engineering Case #665981)================ The Index Consultant will no longer print debugging messages to the console window. ================(Build #2056 - Engineering Case #665980)================ The Index Consultant could have failed if the tables involved had text indexes. This has been fixed. ================(Build #2053 - Engineering Case #775805)================ If a Data Source was modified in the registry to contain the 'Server' connection parameter rather than 'ServerName', that parameter would have been ignored. This has been fixed. ================(Build #2048 - Engineering Case #775231)================ If the Service utility (dbsvc) was used to create a service with a space in the service name (e.g. dbsvc -w "My service name"), the service would have been created but it would not have been able to start. This has been fixed.
================(Build #2047 - Engineering Case #774671)================ The Interactive SQL utility would have failed to set its exit code to a non-zero value in a number of cases:
- If no connection parameters were given, but a statement was given.
- If a READ statement or the name of a SQL file was given on the command line, and the file existed but could not be read for any reason.
- If an error was encountered while reading the results of a statement.
This has now been corrected. ================(Build #2046 - Engineering Case #773279)================ Turning off the Interactive SQL utility's COMMIT_ON_EXIT option had no effect for the session in which it was turned off. This has now been fixed. ================(Build #2022 - Engineering Case #771873)================ If the Interactive SQL utility (dbisql) was run as a command-line program, it was possible for dbisql to report that it could not connect to the database, and then proceed to execute a statement anyway. In this scenario, dbisql would set its return code to 9 (could not connect), which was inconsistent with the fact that it actually executed the statement. For this to have occurred, all of the following would have had to be true:
- A valid connection string and a statement must have been specified on the command line.
- The database server had to be unreachable when dbisql started.
- The database server must then have become available as dbisql completed its startup sequence.
This has been fixed. If the database server is unavailable when dbisql initially attempts to connect, it will now simply shut down, rather than attempting to execute the given statement. ================(Build #2018 - Engineering Case #771429)================ The Interactive SQL utility would have crashed when using the Import Wizard to import a shape file, if that shape file had an associated .DBF file which contained a DATE, TIME, or TIMESTAMP column. This has been fixed.
================(Build #2016 - Engineering Case #770967)================ The following issues have been fixed: If a post login procedure did not return a result set, the Interactive SQL utility would have crashed. Also, a race condition could have caused the window containing the post login messages to appear on-screen at the same time as the "Connecting to database" status window. When this occurred, neither window could be closed. ================(Build #2014 - Engineering Case #770747)================ The Interactive SQL utility (dbisql) could have consumed more disk space than was required under certain circumstances. This has now been corrected. ================(Build #2013 - Engineering Case #770576)================ On Red Hat Enterprise Linux 7, the Interactive SQL utility could not check for updates, nor open online help, if the computer it was running on required a network proxy to reach the internet. This has been fixed so that on non-Windows platforms, the software will use the proxy information in the http_proxy environment variable, if set. Note, this problem also affected Sybase Central and the Console utility, which have been fixed as well. ================(Build #2012 - Engineering Case #770324)================ The Broadcast Repeater utility (dbns16) did not start. This problem has been fixed. ================(Build #2011 - Engineering Case #770200)================ The unused Batik JAR file JS.JAR has been removed from the list of JAR files that the Interactive SQL utility searches for on startup. ================(Build #2005 - Engineering Case #769879)================ Previously, DBISQL did not commit when disconnecting from a SAP HANA database even if its "Commit on exit or disconnect" option was selected. This has been fixed.
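The http_proxy convention mentioned in case 770576 above is simply a proxy URL stored in an environment variable. A minimal sketch of parsing it (the proxy address below is a made-up example, not a real host):

```python
# Parse a proxy URL of the form normally found in the http_proxy
# environment variable (e.g. os.environ.get("http_proxy")).
# "proxy.example.com:3128" is an illustrative placeholder.
from urllib.parse import urlparse

proxy_url = "http://proxy.example.com:3128"
parts = urlparse(proxy_url)
proxy_host, proxy_port = parts.hostname, parts.port
print(proxy_host, proxy_port)  # proxy.example.com 3128
```

Tools that honor this variable typically read it at startup and route outbound HTTP requests through the resulting host and port.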
================(Build #1994 - Engineering Case #769065)================ The message that reports an update count would have reported the wrong action (INSERT, UPDATE, DELETE or MERGE) if a number of statements were executed at once and were not separated by a command delimiter. This has been fixed. ================(Build #1990 - Engineering Case #768514)================ On some RedHat distributions, if the SQL Anywhere Monitor was configured as a service that started automatically when the computer was rebooted, it may have sporadically failed to start. This has been fixed. ================(Build #1989 - Engineering Case #767881)================ When SQL Remote was generating messages, if a change had been made to a column with a numeric or decimal data type, SQL Remote would have failed to add information to the message that would have allowed the receiving side to perform conflict resolution. This issue has now been fixed. ================(Build #1960 - Engineering Case #766369)================ When running sqlpp on Linux systems with a recent version of glibc, a syntax error could have been reported for perfectly good code. Moreover, the "near" text may have appeared mangled. For example, running the following statements through sqlpp:
EXEC SQL BEGIN DECLARE SECTION;
EXEC SQL END DECLARE SECTION;
would have resulted in the following error:
test.sqc(35): Error! E2636 near 'AREARE': Incorrect Embedded SQL syntax
This has been fixed. There is no known workaround. ================(Build #1926 - Engineering Case #764632)================ When using the Import Wizard to import shape file data into a database, the wizard showed a page which prompted for a table in which to save the data. That page included a component that listed all of the owners in the database. That list inadvertently contained system-defined role names. This has been corrected so that these names are no longer in the list.
================(Build #1925 - Engineering Case #764516)================ In the "Connect" dialog for an UltraLite database, if "Tools/Copy Connection String to Clipboard" was clicked, the database encryption key (if set) was copied in clear text to the clipboard. This has been corrected so that the key is now replaced by three asterisks, the same as for the connection password. ================(Build #1916 - Engineering Case #763964)================ The text "SAP Sybase IQ" was incorrectly displayed in the Categories list on the Options window. This has been corrected so that the text "SAP IQ" is now displayed. ================(Build #1909 - Engineering Case #763414)================ The number of messages that could be displayed in the Interactive SQL utility's Messages panel was limited to 1000. This fixed limit has been removed. ================(Build #1903 - Engineering Case #763149)================ If the Interactive SQL utility (dbisql) had been launched from Sybase Central to debug a stored procedure, and was then closed while at a breakpoint, dbisql would have crashed. This has been fixed. ================(Build #1900 - Engineering Case #762942)================ The Interactive SQL utility (dbisql) could have crashed if the text completer was opened in poorly formed SQL. For the crash to occur, the caret had to follow an unmatched quotation mark. This has been fixed. ================(Build #1896 - Engineering Case #762797)================ The text completer could have crashed when opened in an incomplete or poorly-formed SELECT...FROM clause. This has been fixed. ================(Build #1896 - Engineering Case #762775)================ When connected to an ASE server using the "Generic ODBC Database" option in the "Connect" window, the Interactive SQL window title was empty. Now, the title contains the server name, database name, login ID and user ID (if known). This restores the behavior of version 12, before this bug was introduced.
================(Build #1895 - Engineering Case #762683)================ Importing data from a text file into a SAP HANA database could have failed with the message "Could not create table ... incorrect syntax near "long" ...". This problem would have occurred when importing character data into a new table. This has been fixed. ================(Build #1895 - Engineering Case #762676)================ Importing from a CSV file which contained unquoted values could have failed if the values contained apostrophes or quotation marks in the middle of the value. This has been fixed. ================(Build #1893 - Engineering Case #762535)================ The Interactive SQL utility would have crashed if given a SQL file on the command line that did not exist and the "on error" option was "continue" or "notify_continue". This has been fixed. ================(Build #1892 - Engineering Case #762531)================ The Create Procedure wizard would have crashed when creating a procedure for a Java external environment. This has been fixed. ================(Build #1861 - Engineering Case #732565)================ MSI installs generated using the Deployment wizard would have always contained the same UpgradeCode property. As a result, installing a newer version of SQL Anywhere would cause the older version to be uninstalled. This has been fixed by changing the UpgradeCode property to a distinct code for each major revision of SQL Anywhere. ================(Build #1859 - Engineering Case #760339)================ A number of issues related to the "Save as ODBC Data Source" dialog when running 64-bit software on 64-bit Windows have been corrected:
-- The dialog appeared to give the option of creating both 32 and 64-bit versions of a user data source. This option applies only to creating system DSNs, and is now disabled.
-- When trying to create both a 32 and 64-bit system DSN, if the 32-bit DSN could not be created, the error message was irrelevant.
This has been fixed; the error message now says why the data source could not be created.
-- Data sources could be created with non-canonical forms of the connection parameter names. This would have prevented the Data Source utility (dbdsn) from listing the data source completely. This has been fixed.
-- Creating a system data source could have caused up to three elevation prompts if a 32-bit data source was also created. Now, there is at most one prompt.
-- The dialog allowed creation of a user data source even if there was already a system data source of the same name. Similarly, it allowed creation of a system data source with the same name as an existing user data source. This was very poor practice and could have led to unexpected results if a user didn't realize that user data sources take precedence, so the practice is now simply disallowed.
================(Build #1843 - Engineering Case #759139)================ The "custom" source control option -- the one in which a command line is entered to check in, check out, and so on -- was broken. Attempting to enable it resulted in the error message "Interactive SQL could not load the interface library for your source control system." This has now been fixed. ================(Build #1832 - Engineering Case #757529)================ A security issue with the Unload utility (dbunload) has been corrected. ================(Build #1815 - Engineering Case #756786)================ In the Console utility (dbconsole), if the Connections panel was configured to show the last reported statement, and the statement was longer than 255 characters, a "Right truncation of string data" error would have been reported. This has been fixed. ================(Build #1796 - Engineering Case #755529)================ Creating a certificate with an expiry date beyond 2050 would have resulted in a certificate that expired 100 years before the expected expiry date. This has been fixed.
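The 100-year shift in case 755529 matches a well-known X.509 pitfall: the ASN.1 UTCTime type stores only a two-digit year, which RFC 5280 maps to the range 1950-2049, so validity dates from 2050 onward must be encoded as GeneralizedTime instead. A sketch of the two-digit-year mapping (illustrative, not SQL Anywhere's code):

```python
def utctime_year(two_digit_year: int) -> int:
    """Interpret a UTCTime two-digit year as RFC 5280 requires:
    50-99 mean 1950-1999, and 00-49 mean 2000-2049."""
    if two_digit_year >= 50:
        return 1900 + two_digit_year
    return 2000 + two_digit_year

# A 2051 expiry naively written as UTCTime "51..." reads back as 1951,
# i.e. exactly 100 years early.
print(utctime_year(51))  # 1951
print(utctime_year(30))  # 2030
```

This is why certificate tools must switch to the four-digit GeneralizedTime encoding for any date in 2050 or later.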
================(Build #1795 - Engineering Case #755372)================ If the Certificate Creation utility (createcert) was used without the -3des option, a warning was displayed saying that the resulting certificate would not be compatible with older versions of the software. The build numbers shown in this message were incorrect. This has been corrected. ================(Build #1794 - Engineering Case #755094)================ The Deployment Wizard was not correctly populating the 'Environment' or 'Registry' tables in the MSI package, meaning that 'PATH', 'SQLANY16' and 'SOFTWARE\Sybase\SQL Anywhere\16.0' were not being set inside SQL Anywhere 16 deployment MSIs. This has been fixed. ================(Build #1792 - Engineering Case #754973)================ If certain errors occurred during a backup, the Backup utility (dbbackup) would have reported "100% complete" before displaying the error. This has been fixed. ================(Build #1788 - Engineering Case #754733)================ Interactive SQL could have appended a spurious character to the command line if it was passed a command file which had been hidden using the File Hiding utility (dbfhide). This has been fixed. ================(Build #1788 - Engineering Case #754638)================ The Service utility for Windows (dbsvc) treated a leading @ in a password as a directive to expand the argument from a file or environment variable. This behavior is consistent across many SQL Anywhere command line tools; however, there is no general escape syntax for a leading @ in command line arguments. Users can work around the issue by avoiding user-defined elements with a leading @, but this is not acceptable for user accounts with established or generated passwords. A fix was made to the Service utility specifically so that it falls back to taking the entire password argument following -p literally if the expansion fails. This change does not apply to other tools.
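The fallback behavior described for dbsvc in case 754638 can be sketched as follows. This is an illustrative model of the expansion rule, not the utility's actual code, and resolve_password is a hypothetical name:

```python
# Hypothetical sketch: expand a leading-@ argument the way many command
# line tools do, but fall back to the literal value if expansion fails.
import os

def resolve_password(arg: str) -> str:
    if not arg.startswith("@"):
        return arg
    name = arg[1:]
    if name in os.environ:            # @NAME names an environment variable
        return os.environ[name]
    if os.path.isfile(name):          # @file names a file to read
        with open(name) as f:
            return f.read().strip()
    return arg                        # expansion failed: take it literally

print(resolve_password("plain-password"))
print(resolve_password("@No_Such_Var_Or_File_12345"))
```

The last line is the behavior the fix introduced: a password that merely happens to begin with @ is no longer rejected when no matching variable or file exists.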
================(Build #1766 - Engineering Case #753365)================ Typing into the SQL field of the Spatial Viewer window could have caused an internal error if the database connection had been closed from the main DBISQL window. This has been fixed. ================(Build #1759 - Engineering Case #752866)================ When displaying results as text, column values might have been misaligned if a column to the left contained text with full-width Asian characters. This has been fixed. ================(Build #1753 - Engineering Case #733912)================ The File Hiding utility (dbfhide) would have crashed if an input file was larger than 65528 (64K-8) bytes. This has been fixed so that an error is now displayed if the file is too large. ================(Build #1748 - Engineering Case #752033)================ An attempt to use the Extraction utility (dbxtract) on a database running in the cloud would have failed with an invalid login_mode option error. This has been fixed. ================(Build #1737 - Engineering Case #751283)================ If dbdsn was used with the -or switch to create an Oracle DSN, dbdsn -g would not have listed the Oracle-specific parameters. This has been fixed. ================(Build #1732 - Engineering Case #751019)================ When the Import and Export wizards offered a list of owners for a SQL Anywhere or IQ database, role names were consistently excluded. This has been corrected so that role names are now included. ================(Build #1731 - Engineering Case #750919)================ Opening online help (DCX) from the graphical administration tools may have failed with the error message "Web page is not accessible". This would have happened under the following conditions:
- Interactive SQL 12 was being run, the "Help" button was clicked on the "Connect" window the first time it was opened, and the network used a proxy to connect to the internet.
- Sybase Central 12 or later, or Interactive SQL 12 or later, was being run, the fast launcher was turned on, and the network used a proxy to connect to the internet.
These problems have been fixed. ================(Build #1712 - Engineering Case #749610)================ When the Interactive SQL utility was run on Japanese or Chinese Linux, the list of categories in the "Options" window was so narrow that the names of the categories were not displayed at all. This has now been corrected. Note, the same sizing issue also occurred with the "Options" window of the SQL Anywhere Console utility. ================(Build #1705 - Engineering Case #749074)================ If the Data Source utility (dbdsn) was used with the -or command line option to create a DSN that used the SQL Anywhere Oracle driver, the driver name would have been incorrect. The driver name should be "SQL Anywhere <version> - Oracle" rather than "iAnywhere Solutions <version> - Oracle". This has been fixed. ================(Build #1697 - Engineering Case #748557)================ Opening the text completer when connected to an ASE database could have caused the Interactive SQL utility to crash. This has been fixed. ================(Build #1676 - Engineering Case #747342)================ On Windows systems, the Data Source utility (dbdsn) would have failed with the error "User Data Source "<name>" could not be written to registry" when trying to create a DSN with a name longer than 32 characters. This is a Windows limitation, so a more appropriate error message is now displayed. ================(Build #1646 - Engineering Case #744458)================ Mac OS X, Solaris, Linux, AIX, and HP-UX platforms no longer require setting the shared library path before launching the Interactive SQL utility. The path is now set by the launcher using the 'LIBRARY_PATHS' setting in the dbisql.ini file.
================(Build #1631 - Engineering Case #744845)================ The context menu for the SQL Statements panel includes a "Help on" menu item if there might be online help for the statement under the mouse. Clicking the "Help on" menu item opens help in a browser if local help is not installed. The process of opening the help has been sped up, especially if no help files are installed; how much faster depends on the network connection. Also, the software is now more tolerant of whitespace between keywords. For example, suppose the SQL Statements panel contained the following valid statement, spread across four lines:
CREATE
VARIABLE
retcode
INT
Clicking "Help on" in the context menu would have failed to open help because of the newlines that separate the tokens. Now, help for the CREATE VARIABLE statement will be opened. ================(Build #1631 - Engineering Case #744754)================ The INPUT statement or the Import wizard in the Interactive SQL utility could have skipped rows from a TEXT formatted input file if it contained strings which were delimited by quotation marks, and the string contained an apostrophe. This has been fixed. Note, there was no problem if the strings were delimited by apostrophes. ================(Build #1608 - Engineering Case #743562)================ Using the 'DBCreatedVersion' DBTools method against a version 16 database would have returned an incorrect value. This has been fixed. ================(Build #1599 - Engineering Case #742979)================ If data from a text file was imported, the contents of the file could have been misinterpreted, resulting in garbage characters being imported. For this to happen, the file must have been encoded in Unicode, must have contained a literal string (enclosed in apostrophes) which contained a backslash ( "\" ) character, and the string must have contained characters which cannot be expressed in 7-bit ASCII. This has been fixed.
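The whitespace tolerance added in case 744845 above amounts to collapsing runs of whitespace, including newlines, before matching a statement's leading keywords. A hypothetical sketch of that normalization (leading_keywords is an invented name, not the utility's code):

```python
# Collapse all whitespace runs, then compare the leading tokens, so that
# "CREATE\nVARIABLE" and "CREATE VARIABLE" match the same help topic.
import re

def leading_keywords(sql: str, count: int = 2) -> str:
    """Return the first `count` tokens of a statement, uppercased."""
    tokens = re.split(r"\s+", sql.strip())
    return " ".join(tokens[:count]).upper()

stmt = "CREATE\nVARIABLE\nretcode\nINT"
print(leading_keywords(stmt))  # CREATE VARIABLE
```

With this normalization, the statement spread across four lines resolves to the CREATE VARIABLE help topic regardless of the whitespace between tokens.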
================(Build #1599 - Engineering Case #742293)================ A case sensitive database with the DBA user name spelled in a way other than 'DBA' (for example, 'dBA' or 'dba') and a password other than 'sql' could have failed to be unloaded. This has been fixed. ================(Build #1598 - Engineering Case #743069)================ When manually submitting an error report with a command like: dbsupport -sc ... , usage statistics would have been submitted, but the report itself would not. This has been fixed. ================(Build #1592 - Engineering Case #742549)================ The Validation utility (dbvalid) would have returned EXIT_BAD_DATA(=2) if an invalid object name was included in the object-name-list. If these objects are not found, the error EXIT_FAIL(=1) should ideally be returned instead. This has been fixed. ================(Build #1567 - Engineering Case #740992)================ SQL keywords from the CREATE FUNCTION statement are now suggested properly. Previously, opening the text completer when the caret was after the function name, but before the BEGIN keyword, would have caused the Interactive SQL utility to suggest only CREATE FUNCTION and CREATE FUNCTION...BEGIN...END. A similar issue with the CREATE PROCEDURE statement was also fixed. ================(Build #1547 - Engineering Case #739802)================ When connected to an IQ database, or if the Interactive SQL utility is configured to use only IQ, the following "Help" menu items have been disabled:
Interactive SQL Help
SQL Syntax
Keyboard Shortcuts
Also, on the "Connect" window, the "Help" button has been disabled if the type selected is "SAP Sybase IQ". ================(Build #1537 - Engineering Case #728742)================ If a database contained a materialized view that used key joins, then unloading and subsequently reloading the database would have failed. This problem has now been fixed.
================(Build #1529 - Engineering Case #738691)================ Using the dbmanageetd.exe command line option -fregex prevented any results from being filtered. This has been fixed. ================(Build #1527 - Engineering Case #738030)================ The Event Trace Data File Management utility (dbmanageetd) did not display the full contents of an ETD file if logging to the file was interrupted (e.g. because of a process crash) and then resumed on a new process. This has been fixed. ================(Build #1516 - Engineering Case #737813)================ In the Options window, if the "Editor" category and the "Tabs" tab were selected, and then the Options window was resized, the controls in the tab could have resized themselves in a way that made it very difficult to see the values. This has been fixed. ================(Build #1516 - Engineering Case #737801)================ If a single user on a computer ran a number of copies of the Interactive SQL utility (dbisql), it was possible for its options to have been reset to their defaults. For this to have occurred, one dbisql process would have had to be starting while another dbisql process was shutting down. This has now been fixed. ================(Build #1472 - Engineering Case #733155)================ During silent installs of a SQL Anywhere Monitor SP, the Migration tool's progress window was being displayed. This has been fixed so that the progress window is no longer displayed during a silent install. ================(Build #1471 - Engineering Case #733905)================ Attempting to export a result set which contained character data to a Microsoft Access database would have failed with a message saying that 'there is no data type in the destination database that corresponds to "char".' This has been fixed.
================(Build #1463 - Engineering Case #733469)================ The automatic text completer used in the Interactive SQL utility could have behaved incorrectly in a CREATE TRIGGER statement. The following have been fixed: - When suggesting SQL statements, "CREATE TRIGGER" could have appeared in the list of suggestions twice. - When writing a CREATE TRIGGER statement, the text completer would have suggested only "CREATE TRIGGER" statements rather than keywords that matched what was typed so far. - Only SQL keywords and owner names were suggested. Now, table names are also suggested. Issues related to other types of CREATE statements have also been fixed: - CREATE ROLE and CREATE LDAP SERVER statements were never suggested. They are now suggested where appropriate. - If the second token in a CREATE statement was misspelled, the completer would have suggested only statements which started with the keyword CREATE. Pressing Enter would then have replaced the entire statement text with one of the CREATE statements, which was seldom the user's intent. ================(Build #1451 - Engineering Case #732617)================ It was not possible to cancel statements which contained a brace character ( { or } ), even if the brace was in a comment. This has been fixed so that such statements can now be canceled. ================(Build #1451 - Engineering Case #732594)================ The "Plan Viewer" menu item could have been incorrectly disabled if there had not yet been a connection to a database. This has been fixed. ================(Build #1451 - Engineering Case #732584)================ When editing a binary table value in either the Interactive SQL utility or Sybase Central, an assertion error would have been reported if the existing value was not null. This has been fixed. 
================(Build #1446 - Engineering Case #731939)================ Binary values could have been unexpectedly truncated when being displayed to a console window, or in Interactive SQL if the program was configured to display result sets as text. This has been fixed. ================(Build #1443 - Engineering Case #731965)================ If a user ID and an encrypted password were given on the Interactive SQL utility's command line, the "Connect" dialog would have always opened, even if the user ID and password were sufficient to open a connection. This has been fixed. Now, Interactive SQL will attempt to open the connection with the given connection parameters. ================(Build #1443 - Engineering Case #731945)================ The Fast Launcher feature of Interactive SQL and Sybase Central has an option to automatically terminate the Fast Launcher process if the program is not used for some number of minutes. The mechanism for terminating the process could have failed, leaving running, but unused, processes visible in the Windows Task Manager. This has been fixed. ================(Build #1433 - Engineering Case #731199)================ Attempting to connect to an SAP HANA database using a system ODBC data source would have failed with a message which said that the host name and port were missing. This has been fixed. Note that connecting using user data sources worked as expected; it was only system data sources that were affected by this problem. ================(Build #1433 - Engineering Case #731172)================ The Interactive SQL utility did not display BLOB data from SAP HANA tables correctly. This has been fixed. ================(Build #1432 - Engineering Case #731071)================ When connected to an SAP HANA database, attempting to display BINARY, VARBINARY, or LONG VARBINARY data would have resulted in a message saying that the result set could not be displayed. This has been fixed. 
================(Build #1432 - Engineering Case #730919)================ Clicking the Cancel button in the Spatial Viewer window could have failed to cancel the execution. At that point, the Spatial Viewer could not then be closed. This problem has been fixed, and execution can now be cancelled. ================(Build #1431 - Engineering Case #730928)================ The Interactive SQL utility could have crashed when viewing binary values if the long value window was closed before the server returned the complete cell value. This has been fixed. ================(Build #1430 - Engineering Case #730800)================ Some uniqueidentifier column values could have been displayed as "(IMAGE)" in the result set table. This has been fixed. ================(Build #1430 - Engineering Case #730797)================ The fix for Engineering case 728776 introduced a bug which caused the CREATE TABLE ON clause of the INPUT statement to fail with a message saying that the destination table did not exist. This has been fixed. ================(Build #1430 - Engineering Case #730789)================ The INPUT statement and the Import wizard could have failed while importing spatial data if the source column did not have a SRID constraint, the data contained an embedded non-zero SRID, and Interactive SQL was creating a new table to hold the imported data. This has now been fixed. ================(Build #1429 - Engineering Case #730652)================ If the Fast Launcher was enabled, but was unable to initialize, the Interactive SQL utility or Sybase Central would have crashed when the Fast Launcher was subsequently disabled. This has been fixed. ================(Build #1426 - Engineering Case #730485)================ The Interactive SQL utility shows result sets using a scrollable table, which can be searched. With "Match case" selected in the "Find in Results" window, dbisql would still have performed a case-insensitive search. This has been fixed. 
Also, it was possible for the table cell which contained the matched text to be hidden under the "Find in Results" window. Now, the window is automatically moved out of the way. ================(Build #1419 - Engineering Case #729852)================ The Text Completer has an option to open automatically when typing to suggest object names. Opening the Query Editor window had an inadvertent side-effect of always turning off this option. This has been fixed so the Query Editor no longer permanently turns off the option. ================(Build #1418 - Engineering Case #728886)================ When the "Show all result sets" option is on, the Interactive SQL utility will display all the result sets returned by a query. If a statement produced more than one result set, and the Export Wizard was used to export those result sets, clicking the "Next" button on the first page would have returned an error saying that only one result set can be exported to an ODBC data source. This message would have been returned even when not exporting to a database. This has been fixed. The Export Wizard now supports exporting multiple result sets to text files, HTML files, and XML files. ================(Build #1410 - Engineering Case #728865)================ The Index Consultant would have reported an error message when opened if the SQL statement being analyzed contained a semicolon as part of an identifier name or a literal string. This has been fixed. ================(Build #1409 - Engineering Case #728776)================ Attempting to import data into a table for which the user did not have permission to select rows would have failed with the incorrect error message "The table you selected ... does not exist." Similarly, the same incorrect error message would have been presented when exporting into a database table for which the user did not have permission to select rows. These problems have been fixed. 
The error message now clearly indicates that you don't have permission to select from the table. Users could encounter this problem when executing the INPUT or OUTPUT USING statements, or when using the Import or Export wizards. ================(Build #1409 - Engineering Case #728720)================ A long delay may have been observed when right-clicking in the SQL Statements panel, especially if the internet connection was slow, or if there was no internet connection at all. This has been fixed. Note, the problem does not occur if the documentation has been installed locally. ================(Build #1403 - Engineering Case #728245)================ ‘Digital Signature’ has been added to the default Key Usage for non-certificate authorities for the Certificate Creation utility (createcert). Some OpenSSL implementations return an error if the peer certificate does not have ‘Digital Signature’ in its key usage. ================(Build #1381 - Engineering Case #727096)================ Clicking the "Single Step" menu item executes the SQL statement containing the editing caret, then selects the next statement in the "SQL Statements" panel. If the SQL statements were separated by the word "GO", the single stepping could have failed depending on the specific statement being executed, resulting in a syntax error that referred to a trailing letter "G" in the statement. This has been fixed.

SQL Remote for SQL Anywhere - Configuration Scripts

================(Build #2086 - Engineering Case #779496)================ If a database used a Turkish character set, the SQL Remote sr_add_message_server system procedure may have failed. The likelihood of this problem occurring has now been reduced. To avoid the problem, SQL Remote option names should always be specified using the documented lowercase letters for databases using a Turkish character set.
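The underlying issue is the Turkish dotless-i casing rule, under which an uppercase 'I' lowercases to 'ı' rather than 'i', so an option name typed in uppercase no longer matches its stored lowercase form. A minimal Python sketch of the mismatch; the turkish_lower helper is a simplified stand-in for real locale-aware conversion, and "directory" is used only as an example option name:

```python
# Simplified stand-in for Turkish locale-aware lowercasing: in Turkish,
# 'I' lowercases to dotless 'ı' and dotted 'İ' lowercases to 'i'.
def turkish_lower(s):
    return s.replace("I", "ı").replace("İ", "i").lower()

stored_option = "directory"   # option names are stored in lowercase
typed_option = "DIRECTORY"    # user supplies the name in uppercase

print(typed_option.lower() == stored_option)          # ASCII lowercasing matches
print(turkish_lower(typed_option) == stored_option)   # Turkish lowercasing does not
```

This is why the workaround is to always type option names in the documented lowercase form on Turkish-character-set databases: no case conversion is then needed, so the locale rule never applies.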

SQL Remote for SQL Anywhere - Database Tools Interface

================(Build #2339 - Engineering Case #802216)================ If SQL Remote scanned a STOP SUBSCRIPTION and START SUBSCRIPTION command for the same subscription, it was possible for SQL Remote to have crashed if an operation that belonged to this subscription was scanned between the two commands. This has now been fixed. ================(Build #1716 - Engineering Case #749714)================ If the SET REMOTE OPTION statement had been used to store message control parameters in the database, it was possible for SQL Remote to have failed to gather the message control parameters from the database in very rare circumstances. The user would typically be prompted with a dialog box to fill in the message control parameters, and all the parameters would be blank. If the message control parameters were manually entered in the dialog, SQL Remote would succeed. This has now been fixed.

SQL Remote for SQL Anywhere - Extraction Utility for Adaptive Server Anywhere

================(Build #2075 - Engineering Case #777476)================ The Extraction utility (dbxtract) would have failed to extract indexes defined on materialized views. This would have resulted in a failure during rebuild if an IMMEDIATE REFRESH materialized view was extracted, as a unique index on the view is mandatory. The issue could have been worked around by adding the -xv switch to dbxtract to not extract views. This problem has now been fixed, and indexes defined on materialized views are now extracted.

SQL Remote for SQL Anywhere - File Messaging for Adaptive Server Anywhere

================(Build #1436 - Engineering Case #730270)================ SQL Remote always assumes that all databases involved in replication share the same character set. By default, SQL Remote will apply source CHAR data to a target database using the default character set of the operating system it is running on, ignoring the source data character set. When using a database character set that is different from the default character set of the operating system, dbremote must either be instructed to perform explicit data conversion to that character set on its connection string, e.g. dbremote -c "CHARSET=utf8;…", or be instructed to always use the CHAR character set of the target database to apply the remote CHAR data, e.g. dbremote -c "CHARSET=none;…".
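The kind of corruption that the CHARSET connection parameter avoids can be illustrated in Python: bytes written in one character set and applied as if they were another do not round-trip. The sample string and the cp1252/UTF-8 pairing are illustrative only, not tied to any particular SQL Remote deployment:

```python
# A UTF-8 encoded CHAR value applied as if it were cp1252 is corrupted.
source = "café"                       # CHAR data in the source database (UTF-8)
on_the_wire = source.encode("utf-8")  # bytes as stored in the message

misapplied = on_the_wire.decode("cp1252")  # wrong target character set: mojibake
converted = on_the_wire.decode("utf-8")    # explicit conversion, as with CHARSET=

print(misapplied)  # cafÃ©
print(converted)   # café
```

Setting CHARSET=none corresponds to the second decode: the bytes are interpreted in the target database's own CHAR character set rather than the operating system default.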

SQL Remote for SQL Anywhere - SQL Remote for Adaptive Server Anywhere

================(Build #2095 - Engineering Case #772485)================ In passthrough mode for SQL Remote, the server also placed calls to internal procedures into the transaction log. If these call statements were replicated to the remote side, errors would have occurred. This has been fixed. ================(Build #1755 - Engineering Case #752638)================ SQL Remote for SQL Anywhere could have failed with the following error: SQL statement failed: (-685) Resource governor for 'prepared statements' exceeded, when it was running in continuous mode with the command line option -x. This problem has now been fixed.

Sybase Central Java Viewer - Java Viewer

================(Build #2148 - Engineering Case #786117)================ The Unload Database wizard could have crashed after unloading the database. The crash was intermittent, and happened only rarely. It has been fixed. ================(Build #2027 - Engineering Case #772137)================ It was not possible to launch Sybase Central by executing scjview on Mac OS X 10.8 with XCode 6 installed. This has been fixed. ================(Build #2008 - Engineering Case #769881)================ On Chinese RedHat 7 Linux, the syntax highlighting editor chose an inappropriate (proportional) font. This choice caused a number of problems: - the caret was displayed in the wrong location - it was impossible to move the caret predictably - the Text Completer made inappropriate suggestions - the line and column information on the status bar were incorrect These have now been fixed. ================(Build #2008 - Engineering Case #769880)================ When comparing databases, if a procedure, function, trigger, or event included a string literal containing a comment token (one of /*, */, //, or --), then Sybase Central could either have raised an assertion, or reported unhandled statements. This has been fixed. ================(Build #1932 - Engineering Case #764854)================ Deleting a connection profile may have caused Sybase Central to crash. This has been fixed. ================(Build #1710 - Engineering Case #749383)================ After installing both the 32-bit and 64-bit tools on a 64-bit Linux machine, running the 32-bit Sybase Central would have failed on startup with the error: "The library.../lib64/libulscutil16.so.1 could not be loaded.....". This has been fixed. ================(Build #1584 - Engineering Case #742094)================ Sybase Central may have crashed on startup when the fast launcher was already running. This has been fixed. 
================(Build #1584 - Engineering Case #741762)================ Sybase Central would have reported an internal error on startup if it was configured to run with the SAP Java Virtual Machine (JVM) rather than the Oracle JVM which ships with SQL Anywhere. This has been fixed. ================(Build #1439 - Engineering Case #731620)================ The Deadlocks tab for a database could have reported that deadlock collection was not enabled, when it was in fact enabled. This has been fixed. ================(Build #1437 - Engineering Case #731452)================ On rare occasions Sybase Central could have crashed on startup. This has been fixed.

UltraLite - Runtime Libraries

================(Build #2704 - Engineering Case #815884)================ UltraLite clients could fail to synchronize with "Invalid sync sequence ID for remote..." errors with certain HTTP intermediaries and multiple MobiLink servers. This has been fixed. ================(Build #2462 - Engineering Case #807004)================ MobiLink sync clients now use improved error handling with some HTTP intermediaries. ================(Build #2311 - Engineering Case #800244)================ Customers developing Windows 10 applications were previously recommended to reference the UltraLite Windows 8.1 libraries. However, it was found that applications referencing these 8.1 libraries could fail the Windows App Certification Kit (WACK) tests, preventing them from being published to the Microsoft app store. UltraLite libraries built specifically for Windows 10 that pass the WACK tests are now provided. ================(Build #2230 - Engineering Case #794181)================ The runtime would have crashed if a temporary table exceeded the maximum row size. This has been fixed. The runtime will now correctly report the error SQLE_MAX_ROW_SIZE_EXCEEDED. ================(Build #2221 - Engineering Case #793459)================ When executing a query containing a comparison operator in the WHERE clause, the UltraLite runtime could have returned incorrect rows, or failed to return the expected rows. This would have occurred when the rows had NULL values for the index used to perform the query. This has been fixed. ================(Build #2177 - Engineering Case #788994)================ On iOS (or Mac OS X), UltraLite synchronizations could have reported a protocol error on a network failure, rather than succeeding or reporting the correct stream error. This has been fixed. 
================(Build #2177 - Engineering Case #788991)================ If an UltraLite client crashed or was terminated in the middle of a download-only synchronization, it was possible for the client to enter a state where all subsequent synchronizations would fail with SQLE_UPLOAD_FAILED_AT_SERVER and the MobiLink log would report mismatched sequence IDs. This has been fixed. ================(Build #2158 - Engineering Case #786956)================ The UltraLite WinRT component was failing the Windows App Certification Test. This has now been fixed. The main impact of this fix is that the Close() methods of the following classes were renamed to CloseObject(): IndexSchema, TableSchema, DatabaseSchema, Table, ResultSet, PreparedStatement, and Connection. This is because these classes implicitly implement the Windows.Foundation.IClosable interface, which has a Close() method. The CloseObject() method performs actions specific to the UltraLite component. ================(Build #2141 - Engineering Case #785272)================ The UltraLite Runtime library could have caused a crash when processing nested queries, typically with at least 32 levels of nesting. This has been fixed. Now, if UltraLite cannot process such queries due to resource constraints, a SQLE_RESOURCE_GOVERNOR_EXCEEDED error is signaled. ================(Build #2136 - Engineering Case #784517)================ The Close method of the Connection class of the UltraLite WinRT component was not visible in the projection to JavaScript, even though it was visible in the projections to C++ and C#. This has been fixed by the addition of the method CloseJS to Connection, which is equivalent to Close, and is visible in the JavaScript projection. Similarly, the Close methods of the following classes were not visible in the projection to JavaScript: DatabaseSchema, IndexSchema, PreparedStatement, ResultSet, Table, and TableSchema. This has been fixed by adding CloseJS methods to these classes. 
================(Build #2119 - Engineering Case #783123)================ The LOCATE() and REPLACE() functions could have failed to match strings when using multi-byte encodings. This has now been fixed. ================(Build #2116 - Engineering Case #782834)================ UltraLite may have corrupted SQL statement string literals which contained escapes. This has been corrected. ================(Build #2087 - Engineering Case #779838)================ When using encrypted databases on Mac OS X systems, or iOS devices, memory was leaked by the runtime. This has been fixed. ================(Build #2024 - Engineering Case #772133)================ Calling the system function ML_GET_SERVER_NOTIFICATION() would have failed on iOS using a client identity stored in the database. This has been fixed. ================(Build #2007 - Engineering Case #769394)================ Using UltraLite on iOS 8 beta software resulted in synchronization errors with HTTPS. This has been fixed. ================(Build #2006 - Engineering Case #765434)================ Creating a publication with an article greater than 256 bytes in length would have resulted in a crash of the UltraLite runtime. This has been corrected so that articles of up to 2048 bytes in length are now supported. SQLE_STRING_PARM_TOO_LONG is now reported when a publication predicate is >=2048 bytes. The database must be created or rebuilt to access the larger publication article size. Older databases will continue to work with a maximum publication article size of 256 bytes. Databases created or rebuilt with this change will run on older runtimes provided publication articles are <= 256 bytes. ================(Build #1958 - Engineering Case #765281)================ In low memory situations, or where the maximum cache size was set low, it was possible for the UltraLite runtime to crash. This has been fixed. 
================(Build #1952 - Engineering Case #765928)================ Transient device write failures (that is, OS file-write primitives intermittently reporting failure) could have resulted in subsequent UltraLite database corruption. This has been fixed. Note, device I/O errors still require the database to be restarted. Corrupt databases can be detected with the Validate API or the ulvalid utility. ================(Build #1857 - Engineering Case #760165)================ The SetBytes methods of the ResultSet and Table classes would have truncated the byte array argument if the array was larger than 32 KB. This has been fixed. The same incorrect behaviour would also have occurred with the SetString method of ResultSet and Table, and with the SetParameterBytes method of PreparedStatement. These cases have also been fixed. ================(Build #1826 - Engineering Case #757696)================ In some cases, an updated row in a table marked as “synchronize all” would have been uploaded as an UPDATE with a pre- and post-image, rather than as an INSERT, which is what all rows in such a table are expected to be uploaded as. Also, if there was an uncommitted update on a row in that type of table, it would not have been uploaded at all. These bugs have been fixed. ================(Build #1804 - Engineering Case #756129)================ Queries comparing a timestamp with time zone column to a string literal could have returned incorrect results. This has been fixed. ================(Build #1794 - Engineering Case #755388)================ With UltraLite for WinRT, file transfers with the stream parameter “compression=zlib” would have failed, resulting in MobiLink communication error code 224. This has been fixed. ================(Build #1768 - Engineering Case #753632)================ In certain cases, duplicate download rows may not have been detected by UltraLite, but rather silently applied as if AllowDownloadDupRows were active. 
This has been corrected so that duplicate download rows will now signal SQLE_PRIMARY_KEY_NOT_UNIQUE errors, or SQLE_DUPLICATE_ROW_FOUND_IN_DOWNLOAD warnings (if the AllowDownloadDupRows option is specified). ================(Build #1766 - Engineering Case #753708)================ The REPLACE string function could have crashed on certain inputs such as: "select replace( 'XAAAAAAAXBBBBXBBBB', 'XAAAAAAA', 'Z' )". This has been fixed. ================(Build #1741 - Engineering Case #751859)================ On Windows, the UltraLite runtime would have crashed when the database file grew to over 2 GB in size. On Android, a SQLE_DEVICE_ERROR (-305) was signaled when the database file grew to over 2 GB. This has been fixed so that UltraLite database files can now grow to 4 GB as documented. ================(Build #1734 - Engineering Case #750611)================ When a row was inserted into a table with a UNIQUE INDEX, with a NULL value in the index key and matching an existing index key with NULLs considered equal, the error SQLE_INDEX_NOT_UNIQUE was signaled. This has been fixed. Note that UltraLite does not support the clause “WITH NULLS NOT DISTINCT” for the CREATE INDEX statement. Therefore, an index key should be considered unique if it contains NULL in at least one column. ================(Build #1576 - Engineering Case #742023)================ Methods of the C++ DatabaseManager class could have failed or returned null, but not set any error in the supplied ULError object. For example, OpenConnection would return null but set the error code to 0 (NOERROR) if the database manager was not initialized. Now an error is set. ================(Build #1553 - Engineering Case #740226)================ Using '*' when specifying the publication list would have caused the UltraLite runtime to crash during a synchronization. This has been fixed. 
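The REPLACE input from case #753708 above no longer crashes; its expected result can be checked against Python's equivalent string replace, shown here only to illustrate the expected semantics, not how UltraLite implements the function:

```python
# Equivalent of: SELECT REPLACE('XAAAAAAAXBBBBXBBBB', 'XAAAAAAA', 'Z')
# 'XAAAAAAA' occurs once at the start of the input, so one substitution is made.
result = "XAAAAAAAXBBBBXBBBB".replace("XAAAAAAA", "Z")
print(result)  # ZXBBBBXBBBB
```

The crash-triggering property of the input is that the search string is a prefix of the subject and the remaining text contains further partial matches ('X' followed by other characters).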
================(Build #1530 - Engineering Case #738779)================ The UltraLite Runtime could have crashed when setting a blob parameter on a prepared statement if the statement was a query and there were more parameters than columns in the result set. This has been fixed. ================(Build #1514 - Engineering Case #737612)================ If an application closed the connection to the database used to synchronize before the synchronization completed, the database could have become corrupt. One possible symptom of this corruption was the runtime sending two different progress values for the same publication during synchronization. This has been fixed. Now the runtime will report SQLE_SYNCHRONIZATION_IN_PROGRESS in the close connection call and immediately abort the application. When the database is restarted, recovery will be done to roll back any uncommitted operations that occurred during synchronization. ================(Build #1501 - Engineering Case #736683)================ An UltraLite database could have unnecessarily grown by a small amount during each synchronization, or as a result of executing publication DDL. This has now been corrected. ================(Build #1477 - Engineering Case #734471)================ For queries with LIKE expressions of the form “c LIKE <pattern>”, where column c is a numeric data type, the UltraLite runtime would have given a SQLE_CONVERSION_ERROR during query execution if column c contained data whose length when converted to a string was longer than the domain size of the numeric type. For example, if c is of type INTEGER and a row in table t contained the integer 12345, then the query SELECT c FROM t WHERE c LIKE ‘1%’ would have caused a SQLE_CONVERSION_ERROR because the length of the string ‘12345’ is greater than 4, the domain size of the INTEGER data type. This has been fixed. 
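The corrected behaviour for the numeric LIKE example above amounts to converting the value to its string form and then applying the pattern, regardless of the string's length relative to the declared type. A Python sketch of the expected semantics; the like_prefix helper is illustrative and handles only patterns of the form 'prefix%':

```python
def like_prefix(value, pattern):
    """Sketch of "value LIKE pattern" for a numeric value and a 'prefix%' pattern."""
    assert pattern.endswith("%"), "only trailing-% patterns handled in this sketch"
    return str(value).startswith(pattern[:-1])

# SELECT c FROM t WHERE c LIKE '1%'  with c = 12345 (INTEGER)
# Previously this raised SQLE_CONVERSION_ERROR because len('12345') > 4.
print(like_prefix(12345, "1%"))  # True
```

Note that the failure did not depend on the pattern matching: any row whose string form exceeded the numeric domain size triggered the conversion error before the comparison was made.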
================(Build #1462 - Engineering Case #733309)================ If an UltraLite application placed a cursor on a row and then moved to before the first row and then back to that row again, the row may have been skipped the second time if the row was updated while the cursor was positioned before the first row. This has been fixed. ================(Build #1430 - Engineering Case #730783)================ In rare circumstances, synchronizations could have failed if another thread was performing operations on the database at the same time. This has been fixed. ================(Build #1390 - Engineering Case #714809)================ A result set could have been returned with both the pre-image and the post-image of a row that was updated while the cursor was sitting on it. This has been fixed. Result sets will now return only one copy of the row, the pre-image or the post-image, but not both, depending on the isolation level and when the update is committed.

UltraLite - SQL Preprocessor

================(Build #2231 - Engineering Case #788865)================ An Embedded SQL application using TCHAR datatypes may have encountered a compile error. This has been fixed. ================(Build #1417 - Engineering Case #729556)================ The UltraLite runtime could have caused an application to crash during the optimization of a query with many JOINs (typically more than 12). This has been fixed.

UltraLite - Sample Application

================(Build #2065 - Engineering Case #776845)================ If the UltraLite samples were stored in a folder whose path contained space characters, the UltraLite ESQL CustDB project would not build with Visual Studio. This problem has been corrected.

UltraLite - UL Java Provider for Sybase Central

================(Build #2113 - Engineering Case #782559)================ Attempting to open the New Table wizard would have crashed SQL Central if a new unsaved table was being edited and the table was not saved before opening the wizard. This has been fixed. ================(Build #1432 - Engineering Case #730932)================ When working with an UltraLite database from Sybase Central or the Interactive SQL utility, some errors displayed by those tools may now display slightly different (more detailed) information in the error message.

UltraLite - UltraLite Engine

================(Build #2552 - Engineering Case #811101)================ UltraLite now reads goodbye responses from MobiLink when using HTTP. Previously this was not done because of the overhead of another GET, and it was not deemed necessary; however, it could result in MobiLink waiting before closing the socket, which could lead to unresponsiveness of the MobiLink server. ================(Build #2242 - Engineering Case #795326)================ If an operation was performed that would result in a SQLE_INDEX_NOT_UNIQUE error, UltraLite would incorrectly report the error as SQLE_PRIMARY_KEY_NOT_UNIQUE. This has been fixed. ================(Build #2114 - Engineering Case #767368)================ UltraLite could have returned fewer rows than expected for queries utilizing an index scan with conditions on multiple columns in the index. This has been corrected. ================(Build #2023 - Engineering Case #770488)================ UltraLite would have failed to synchronize using HTTPS on iOS 8. This has now been fixed. ================(Build #1663 - Engineering Case #743857)================ UltraLite may have blocked concurrent database access while synchronizing, during the UL_SYNC_STATE_FINISHING_UPLOAD synchronization state. This has been fixed. ================(Build #1592 - Engineering Case #741954)================ Performance of schema API calls could have been poor for large schemas. Querying any table would have been slow with Sybase Central and Interactive SQL when the schema was large. This has been fixed.

UltraLite - UltraLite for M-Business Anywhere

================(Build #1873 - Engineering Case #724518)================ When using the MobiLink autodial feature, it was possible for the connection attempt to fail initially. Later connection attempts would generally have worked. This has now been corrected.

UltraLite - UltraLite.NET

================(Build #2306 - Engineering Case #799942)================ Customers developing Windows 10 applications were previously recommended to reference the UltraLite Windows 8.1 libraries. However, it was found that applications referencing these 8.1 libraries could fail the Windows App Certification Kit (WACK) tests, preventing them from being published to the Microsoft app store. UltraLite libraries built specifically for Windows 10 that pass the WACK tests are now provided. ================(Build #2049 - Engineering Case #772116)================ Execution of subqueries whose predicates were sized datatypes, such as char, varchar, and binary, could have crashed due to a data alignment exception. This has been fixed. ================(Build #2020 - Engineering Case #771709)================ When using the UltraLite .NET Data Provider, if the BulkCopyTimeout property was set to 0, an exception would have occurred during a call to WriteToServer. This has now been fixed. The value 0 means that there is no timeout. ================(Build #2004 - Engineering Case #724850)================ DataTable.Load(), ULTable.GetSchemaTable(), or use of a ULIndexSchema object could have thrown the exceptions "Too many temporary tables in connection" or "Attempted to read or write protected memory. This is often an indication that other memory is corrupt" if used repeatedly. The method ULIndexSchema.Close() has been added, and should be called when an application has finished with a ULIndexSchema instance. This method is also now called internally by UL.Net objects that use ULIndexSchema. ================(Build #1577 - Engineering Case #741303)================ A DATE datatype would have included time components if the value supplied contained a time component. As a result, queries could have returned incorrect results when providing a DATE-only predicate value. UltraLite now stores DATE datatypes correctly when setting or changing the value.

UltraLite - Utilities

================(Build #1969 - Engineering Case #766935)================ The UltraLite Initialize Database utility would have reported '?missing string?' as the description for some collations (ulinit -Z). This has been fixed.



16.0.0 Security Patches

A security patch contains an already-released version of the software but includes
updated security components.  This process allows the software to be tested more quickly
so that important security fixes reach customers sooner.  Two build numbers
are recorded for each security patch: one identifies the build of the software
that was previously tested and released; the other identifies the build of the new
security components that have been updated in the release.

The following security patches have been released.
PLATFORM | SOFTWARE BUILD | SECURITY COMPONENTS BUILD | DESCRIPTION
AIX 2207 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.
AIX 2111 2157 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1p.
HP-IA64 2207 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.
HP-IA64 2111 2157 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1p.
Linux 2271 2300 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1t.
Linux 2271 2283 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1t.
Linux 2184 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.
Linux ARM 2087 2302 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1t.
Linux ARM 2087 2283 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1t.
Linux ARM 2087 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.
Linux ARM 2087 2157 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1p.
Mac OSX x64 2252 2300 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1t.
Mac OSX x64 2252 2283 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1t.
Mac OSX x64 2087 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.
Mac OSX x64 2087 2157 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1p.
Sun Solaris SPARC 2207 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.
Sun Solaris SPARC 2111 2157 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1p.
Sun Solaris x64 2207 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.
Sun Solaris x64 2111 2157 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1p.
Windows 2213 2221 The version of OpenSSL used by all SQL Anywhere products has been upgraded to 1.0.1q. The version of OpenLDAP used by the SQL Anywhere server and client libraries has been upgraded to 2.4.43.