MySQL (64-bit) Version History - Page 1

Latest version: MySQL 8.0.34.0 (64-bit)

MySQL (64-bit) version history

MySQL 64-bit is designed to deliver business-critical database applications for enterprise organizations. It provides enterprise developers, database administrators, and ISVs with an array of new enterprise features for more efficient development, deployment, and management of industrial-strength applications. If you need a GUI for MySQL databases, you can download Navicat (a MySQL GUI), which supports importing MySQL, MS SQL, MS Access, Excel, CSV, XML, and other formats into MySQL.


MySQL 8.0.34.0 (64-bit)

Updated: 2023-07-19
Update details:

What's new in this version:

Account Management Notes:
- A new password-validation system variable now permits the configuration and enforcement of a minimum number of characters that users must change when attempting to replace their own MySQL account passwords. This new verification setting is a percentage of the total characters in the current password. For example, if validate_password.changed_characters_percentage has a value of 50, at least half of the characters in the replacement account password must not be present in the current password, or the password is rejected.
- This new capability is one of several that give DBAs more complete control over password management. For more information, see Password Management. (WL #15751)
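The new check can be sketched in SQL; the percentage value and account are illustrative, and validate_password must be installed as a component:

```sql
-- Require that at least 60% of the characters in a replacement password
-- be absent from the current password:
INSTALL COMPONENT 'file://component_validate_password';
SET PERSIST validate_password.changed_characters_percentage = 60;

-- When a user replaces their own password, the new value is now compared
-- against the current one; too-similar passwords are rejected:
ALTER USER USER() IDENTIFIED BY 'example-new-password';
```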

Audit Log Notes:
- In MySQL 8.0.33, the audit_log plugin added support for choosing which database to use to store JSON filter tables. It is now possible to specify an alternative to the default system database, mysql, when running the plugin installation script. Use the audit_log_database server system variable (or -D database_name on the command line) together with the alternative database name, for example:
- $> mysql -u root -D database_name -p < audit_log_filter_linux_install.sql
- For additional information about using audit_log plugin installation scripts, see Installing or Uninstalling MySQL Enterprise Audit.
- MySQL Enterprise Audit now supports using the scheduler component to configure and execute a recurring task to flush the in-memory cache. For setup instructions, see Enabling the Audit Log Flush Task. (WL #15567)
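The flush task described above can be sketched as follows; this sketch assumes the scheduler component and the audit_log_flush_interval_seconds variable documented for this release:

```sql
-- Load the scheduler component, then have the audit log plugin flush its
-- in-memory cache on a recurring interval (here, every 30 minutes):
INSTALL COMPONENT 'file://component_scheduler';
SET GLOBAL audit_log_flush_interval_seconds = 1800;
```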

Binary Logging:
- Several functions have been added to the libmysqlclient.so shared library that enable developers to access a MySQL server binary log: mysql_binlog_open(), mysql_binlog_fetch(), and mysql_binlog_close().

C API Notes:
- In the calling function, len was initialized to 0 and never changed if net->vio was null. This fix adds a check of net before dereferencing vio.
- A variable in the async client was uninitialized in certain code paths. It is fixed by always initializing the variable.

Compilation Notes:
- Microsoft Windows: For Windows, improved MSVC_CPPCHECK support and added checking for MSVC warnings similar to "maintainer" mode; for example, checks now run after all third-party configurations are complete.
- Microsoft Windows: For Windows builds, improved WIN_DEBUG_NO_INLINE=1 support; previously, usage could exceed the library limit of 65,535 objects.
- Upgraded the bundled robin-hood-hashing from v3.8.1 to v3.11.5.
- Removed the unused extra/libcbor/doc/ directory as extra/libcbor/doc/source/requirements.txt inspired bogus pull requests on GitHub.
- Updated the bundled ICU files from version 69.1 to version 73 for the icu-data-files package.
- ZSTD sources bundled in the source tree were upgraded to ZSTD 1.5.5 from 1.5.0.
- The internal MEM_ROOT class memory is now initialized with garbage using the TRASH macro, to make it easier to reproduce bugs caused by reading uninitialized memory allocated from MEM_ROOT.
- We now determine stack direction at runtime rather than at configure time.
- Added the OPTIMIZE_SANITIZER_BUILDS CMake option that adds -O1 -fno-inline to sanitizer builds. It defaults to ON.
- Changed the minimum Bison version requirement from v2.1 to v3.0.4. For macOS, this may require installing Bison via a package manager such as Homebrew.
- MySQL now sets LANG=C in the environment when executing readelf to avoid problems with non-ASCII output.
- On macOS, MySQL would not compile if rapidjson was installed via Homebrew. The workaround was to brew unlink rapidjson.
- MySQL would not build with -DWITH_ZLIB=system; it complained about not finding the system zlib library despite having found it.

Deprecation and Removal Notes:
- Important Change: Since MySQL provides other means of performing database dumps and backups with the same or additional functionality, including mysqldump and MySQL Shell Utilities, the mysqlpump client utility program has become redundant, and is now deprecated. Invocation of this program now produces a warning. You should keep in mind that mysqlpump is subject to removal in a future version of MySQL, and move applications depending on it to another solution, such as those mentioned previously. (WL #15652)
- Replication: The sync_relay_log_info server system variable is deprecated in this release, and getting or setting this variable or its equivalent startup option --sync-relay-log-info now raises a warning.
- Expect this variable to be removed in a future version of MySQL; applications which make use of it should be rewritten not to depend on it before this happens.
- Replication: The binlog_format server system variable is now deprecated, and subject to removal in a future version of MySQL. The functionality associated with this variable, that of changing the binary logging format, is also deprecated.
- The implication of this change is that, when binlog_format is removed, only row-based binary logging, already the default in MySQL 8.0, will be supported by the MySQL server. For this reason, new installations should use only row-based binary logging, and existing ones using the statement-based or mixed logging format should be migrated to the row-based format. See Replication Formats, for more information.
- The system variables log_bin_trust_function_creators and log_statements_unsafe_for_binlog, being useful only in the context of statement-based logging, are now also deprecated, and are thus also subject to removal in a future release of MySQL.
- Setting or selecting the values of any of the variables just mentioned now raises a warning. (WL #13966, WL #15669)
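A minimal illustration of the warnings just mentioned:

```sql
-- Both setting and selecting these deprecated variables now raise warnings:
SET GLOBAL binlog_format = 'ROW';
SHOW WARNINGS;
SELECT @@global.log_bin_trust_function_creators;
SHOW WARNINGS;
```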
- Group Replication: The group_replication_recovery_complete_at server system variable is now deprecated, and setting it produces a warning. You should expect its removal in a future release of MySQL. (WL #15460)
- The mysql_native_password authentication plugin now is deprecated and subject to removal in a future version of MySQL. CREATE USER, ALTER USER, and SET PASSWORD operations now insert a deprecation warning into the server error log if an account attempts to authenticate using mysql_native_password as an authentication method.
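A short sketch of the change (account names and passwords are illustrative):

```sql
-- Still accepted, but now writes a deprecation warning to the error log:
CREATE USER 'legacy_app'@'localhost'
  IDENTIFIED WITH mysql_native_password BY 'example-password';

-- Preferred: the default caching_sha2_password plugin
CREATE USER 'new_app'@'localhost'
  IDENTIFIED WITH caching_sha2_password BY 'example-password';
```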
- Previously, if the audit_log plugin was installed without the accompanying audit tables and functions needed for rule-based filtering, the plugin operated in legacy filtering mode. Now, legacy filtering mode is deprecated. New deprecation warnings are emitted for legacy audit log filtering system variables. These deprecated variables are either read-only or dynamic.
- (Read-only) audit_log_policy now writes a warning message to the MySQL server error log during server startup when the value is not ALL (default value).
- (Dynamic) audit_log_include_accounts, audit_log_exclude_accounts, audit_log_statement_policy, and audit_log_connection_policy. Dynamic variables print a warning message based on usage:
- Passing in a non-NULL value to audit_log_include_accounts or audit_log_exclude_accounts during MySQL server startup now writes a warning message to the server error log.
- Passing in a non-default value to audit_log_statement_policy or audit_log_connection_policy during MySQL server startup now writes a warning message to the server error log. ALL is the default value for both variables.
- Changing an existing value using SET syntax during a MySQL client session now writes a warning message to the client log.
- Persisting a variable using SET PERSIST syntax during a MySQL client session now writes a warning message to the client log.
- MySQL enables control of FIPS mode on the server side and the client side using a system variable and client option. Application programs can use the MYSQL_OPT_SSL_FIPS_MODE option to mysql_options() to enable FIPS mode on the client. Alternatively, it is possible to handle FIPS mode directly through OpenSSL configuration files rather than using the current server-side system variable and client-side options. When MySQL is compiled using OpenSSL 3.0, and an OpenSSL library and FIPS Object Module are available at runtime, the server reads the OpenSSL configuration file and respects the preference to use a FIPS provider, if one is set. OpenSSL 3.0 is certified for use with FIPS.
- To favor the OpenSSL alternative, the ssl_fips_mode server system variable, --ssl-fips-mode client option, and the MYSQL_OPT_SSL_FIPS_MODE option now are deprecated and subject to removal in a future version of MySQL. A deprecation warning prints to standard error output when an application uses the MYSQL_OPT_SSL_FIPS_MODE option or when a client user specifies the --ssl-fips-mode option on the command line, through option files, or both.
- Prior to being deprecated, the ssl_fips_mode server-side system variable was dynamically settable. It is now a read-only variable (accepts SET PERSIST_ONLY, but not SET PERSIST or SET GLOBAL). When specified on the command line or in the mysqld-auto.cnf option file (with SET PERSIST_ONLY) a deprecation warning prints to the server error log. (WL #15631)
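The new read-only behavior can be sketched as:

```sql
-- Accepted, but writes a deprecation warning to the server error log:
SET PERSIST_ONLY ssl_fips_mode = 'ON';

-- Now rejected, since the variable is read-only at runtime:
SET GLOBAL ssl_fips_mode = 'ON';
```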
- The mysql_ssl_rsa_setup program originally provided a simple way for community users to generate certificates manually, if OpenSSL was installed on the system. Now, mysql_ssl_rsa_setup is deprecated because MySQL Community Edition no longer supports using yaSSL as the SSL library, and source distributions no longer include yaSSL. Instead, use MySQL server to generate missing SSL and RSA files automatically at startup (see Automatic SSL and RSA File Generation). (WL #15668)
- The keyring_file and keyring_encrypted_file plugins now are deprecated. These keyring plugins are superseded by the component_keyring_file and component_keyring_encrypted_file components. For a concise comparison of keyring components and plugins, see Keyring Components Versus Keyring Plugins. (WL #15659)
- Previously, the MySQL server processed a version-specific comment without regard as to whether any whitespace followed the MySQL version number contained within it. For example, the comments /*!80034KEY_BLOCK_SIZE=1024*/ and /*!80034 KEY_BLOCK_SIZE=1024*/ were handled identically. Beginning with this release, when the next character following the version number in such a comment is neither a whitespace character nor the end of the comment, the server issues a warning: Immediately starting the version comment after the version number is deprecated and may change behavior in a future release. Please insert a whitespace character after the version number.
- You should expect the whitespace requirement for version-specific comments to become strictly enforced in a future version of MySQL.
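The two comment forms can be sketched as follows (table names are illustrative):

```sql
-- OK: a whitespace character follows the version number
CREATE TABLE t1 (a INT) /*!80034 KEY_BLOCK_SIZE=1024 */;

-- Deprecated: the comment body starts immediately after the version
-- number, so the server now issues a warning
CREATE TABLE t2 (a INT) /*!80034KEY_BLOCK_SIZE=1024*/;
```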
- The MySQL client library currently supports performing an automatic reconnection to the server if it finds that the connection is down and an application attempts to send a statement to the server to be executed. Now, this feature is deprecated and subject to removal in a future release of MySQL.
- The related MYSQL_OPT_RECONNECT option is still available but it is also deprecated. C API functions mysql_get_option() and mysql_options() now write a deprecation warning to the standard error output when an application specifies MYSQL_OPT_RECONNECT. (WL #15766)

IPv6 Support:
- NDB Cluster: NDB did not start if IPv6 support was not enabled on the host, even when no nodes in the cluster used any IPv6 addresses.

Performance Schema Notes:
- The type used for the Performance Schema clone_status table's gtid_executed column has been changed from VARCHAR(4096) to LONGTEXT.

SQL Syntax Notes:
- CURRENT_USER() can now be used as a default value for VARCHAR and TEXT columns in CREATE TABLE and ALTER TABLE ... ADD COLUMN statements.
- When used in this way, the function is also included in the output of SHOW CREATE TABLE and SHOW COLUMNS, and is referenced in the COLUMN_DEFAULT column of the Information Schema COLUMNS table where applicable.
- If you need to ensure that values of the maximum possible length can be stored in such a column, make sure that the column can accommodate at least 288 characters (32 for the user name and 255 for the host name, plus 1 for the separator @). For this reason, while it is possible to use the function as the default for a CHAR column, doing so is not recommended due to the risk of errors or truncation of values.
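The sizing advice above can be sketched as follows (table name is illustrative):

```sql
-- 288 characters covers the longest possible user@host value
-- (32 for the user name, 1 for '@', and 255 for the host name):
CREATE TABLE change_log (
  id INT PRIMARY KEY,
  changed_by VARCHAR(288) DEFAULT (CURRENT_USER())
);
INSERT INTO change_log (id) VALUES (1);  -- changed_by defaults to, e.g., 'root@localhost'
```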

Functionality Added or Changed:
- Important Change: For platforms on which OpenSSL libraries are bundled, the linked OpenSSL library for MySQL Server has been updated from OpenSSL 1.1.1 to OpenSSL 3.0. The exact version is now 3.0.9. More information on changes from 1.1.1 to 3.0 can be found at https://www.openssl.org/docs/man3.0/man7/migration_guide.html.
- Binary packages that include curl rather than linking to the system curl library have been upgraded to use curl 8.1.1.

Fixed:
- Important Change: The default value of the connection_memory_chunk_size server system variable, when introduced in MySQL 8.0.28, was mistakenly set at 8912. This fix changes the default to 8192, which is the value originally intended.
- NDB Cluster: The fix for a previous issue introduced a slight possibility of unequal string values comparing as equal, if any Unicode 9.0 collations were in use, and the collation hash methods calculated identical hash keys for two unequal strings.
- InnoDB: Possible congestion due to purging a large number of system threads has been fixed.
- InnoDB: ddl::Aligned_buffer now uses the standard memory allocator and not kernel memory management.
- InnoDB: An upgrade from MySQL 5.7 to MySQL 8.0.32 might fail due to deprecated configuration parameters innodb_log_file_size or innodb_log_files_in_group. The workaround is to start MySQL 8.0.32 with --innodb-redo-log-capacity=206158430208.
- InnoDB: The rules for aggregating entries in the redo log have been fixed.
- InnoDB: Several errors due to tablespace deletion and the buffer pool have been fixed.
- Packaging; Group Replication: The group replication plugin from the Generic Linux packages did not load on some platforms that lacked a compatible version of tirpc.
- Replication: Changes in session_track_gtids were not always propagated correctly.
- Replication: By design, all DDL operations (including binary log operations such as purging the binary log) acquire a shared lock on the BACKUP_LOCK object, which helps to prevent simultaneous backup and DDL operations. For binary log operations, we checked whether any locks existed on BACKUP_LOCK but did not check the types of any such locks. This caused problems due to the fact that binary log operations should be prevented only when an exclusive lock is held on the BACKUP_LOCK object, that is, only when a backup is actually in progress, and backups should be prevented when purging the binary log.
- Now in such cases, instead of checking for locks held on the BACKUP_LOCK object, we acquire a shared lock on BACKUP_LOCK while purging the binary log.
- Replication: In all cases except one, when mysqlbinlog encountered an error while reading an event, it wrote an error message and returned a nonzero exit code, the exception being for the active binary log file (or any binary log where the format_description_log_event had the LOG_EVENT_BINLOG_IN_USE_F flag set), in which case it did not write a message, and returned exit code 0, thus hiding the error.
- Now mysqlbinlog suppresses only those errors which are related to truncated events, and when doing so, prints a comment rather than an error message. This fix also improves the help text for the --force-if-open option.
- Replication: Compressed binary log event handling was improved.
- Replication: A transaction consisting of events each smaller than 1 GiB, but whose total size was larger than 1 GiB, and where compression did not make it smaller than 1 GiB, was still written to the binary log as one event bigger than 1 GiB. This made the binary log unusable; in effect, it was corrupted since neither the server nor other tools such as mysqlbinlog could read it.
- Now, when the compressed data grows larger than 1 GiB, we fall back to processing the transaction without any compression.
- Group Replication: In a group replication setup, when there was a source of transactions other than the applier channel, the following sequence of events was possible:
- Several transactions being applied locally were already certified, and so were associated with a ticket, which we refer to as Ticket 2, but had not yet been committed. These could be local or nonlocal transactions.
- A view is created with Ticket 3, and must wait on transactions from Ticket 2.
- The view change (VC1) entered the GR applier channel applier and waited for the ticket to change to 3.
- Another group change, and another view change (VC2), occurred while the transactions from Ticket 2 were still completing.
- This gave rise to the following issue: There was a window wherein the last transaction from Ticket 2 had already marked itself as being executed but had not yet popped the ticket; VC2 popped the ticket instead but never notified any of the participants. This meant that VC1 continued to wait indefinitely for the ticket to change, and with the additional effect that the worker could not be killed.
- We fix this by checking the loop condition once per second so that the wait is responsive to changes in that condition; we also register a new stage, so that the loop is more responsive to kill signals.
- Group Replication: Removed a memory leak discovered in Network_provider_manager::open_xcom_connection().
- Group Replication: When a group action was sent to the group and the connection was killed on the coordinator, group members were in different states, with members which received the coordinated action waiting for the member that executed it, and the member which started execution having nothing to process, which caused problems with coordination of the group.
- Now in such cases, we prevent this issue from occurring by causing group actions to wait until all members have completed the action.
- Group Replication: Cleanup of resources used by OpenSSL connections created indirectly by group replication was not carried out as expected at all times. We fix this by adding cleanup functionality that can be called at any time such connections are created by group replication.
- JSON: When the result of JSON_VALUE() was an empty string and was assigned to a user variable, the user variable could in some cases be set to NULL instead.
- With this fix, such queries now return the expected result.
- JSON: Some JSON schemas were not always processed correctly by JSON_SCHEMA_VALID().
- In rare cases, the MySQL server could exit rather than emit an error message as expected.
- The internal resource-group enhancement added in MySQL 8.0.31 and refactored in MySQL 8.0.32 has now been reverted.
- An in-place upgrade from MySQL 5.7 to MySQL 8.0, without a server restart, could result in unexpected errors when executing queries on tables. This fix eliminates the need to restart the server between the upgrade and queries.
- A fix in MySQL 8.0.33 made a change for ORDER BY items already resolved so as not to resolve them again (as is usually the case when a derived table is merged), but this did not handle the case in which an ORDER BY item was itself a reference.
- Changes in session_track_gtids were not always handled correctly.
- Some pointers were not always released following statement execution.
- Some instances of subqueries within stored routines were not always handled correctly.
- Fortified parsing of the network packet data sent by the server to the client.
- Encryption enhancements now strengthen compliance and remove the use of deprecated APIs.
- When a column reference given by table name and column name was looked up in the function find_item_in_list(), we ignored that the item searched for might not have a table name, as it was not yet resolved. We fix this by making an explicit check for a null table name in the sought-after item.
- Deprecated the lz4_decompress and zlib_decompress command-line utilities that exist to support the deprecated mysqlpump command-line utility.
- Queries using LIKE '%...%' ran more poorly than in previous versions of MySQL.
- In Bounded_queue::push(), when Key_generator::make_sortkey() returns UINT_MAX (an error), no key has been produced; now, when this occurs, we no longer update the internal queue.
- As part of this fix, push() now returns true on error.
- The authentication_oci plugin is fixed to allow federated and provisioned users to connect to a DB System as a mapped Proxy User using an ephemeral key-pair generated through the OCI CLI.
- Some queries using common table expressions were not always processed correctly.
- The internal function compare_pair_for_nulls() did not always set an explicit return value.
- Removed the clang-tidy checks that clash with the MySQL coding style.
- Some subqueries using EXISTS in both the inner and outer parts of the query were not handled correctly.
- Rotated audit log files now always reset the ID value of the bookmark to zero, rather than continuing the value from the previous file.
- Errors were not always propagated correctly when evaluating items to be sorted by filesort.
- The fix for a previous issue with ROLLUP led to a premature server exit in debug builds.
- Simplified the implementation of Item_func_make_set::val_str() to make sure that we never try to reuse any of the input arguments, always using the local string buffer instead.
- When transforming subqueries to a join with derived tables, with the containing query being grouped, we created an extra derived table in which to do the grouping. This process moved the initial select list items from the containing query into the extra derived table, replacing all of the original select list items (other than subqueries, which get their own derived tables) with columns from the extra derived table.
- This logic did not handle DEFAULT correctly due to the manner in which default values were modelled internally. This fix adds support for DEFAULT(expression) in queries undergoing the transform previously mentioned. This fix also solves an issue with item names in metadata whereby two occurrences of the same column in the select list were given the same item name as a result of this same transform.
- A query of the form SELECT * FROM t1 WHERE (SELECT a FROM t2 WHERE t2.a=t1.a + ABS(t2.b)) > 0 should be rejected with Subquery returns more than 1 row, but when the subquery_to_derived optimization was enabled, the transform was erroneously applied and the query returned an incorrect result.
- Handling of certain potentially conflicting GRANT statements has been improved.
- A query using both MEMBER OF() and ORDER BY DESC returned only a partial result set following the creation of a multi-valued index on a JSON column. This is similar to an issue fixed in MySQL 8.0.30, but with the addition of the ORDER BY DESC clause to the problematic query.
- For index skip scans, the first range read set an end-of-range value to indicate the end of the first range, but the next range read did not clear the stale end-of-range value and applied it to the current range. Since the indicated end-of-range boundary had already been crossed in the previous range read, reads stopped early, causing multiple rows to be missed in the result.
- We fix this by making sure in such cases that the old end-of-range value is cleared.
- The debug server asserted on certain operations involving DECIMAL values.
- All instances of adding and replacing expressions in the select list when transforming subqueries to use derived tables and joins have been changed so that their reference counts are maintained properly.
- Index Merge (see Index Merge Optimization) should favor ROR-union plans (that is, those using RowID Ordered Retrieval) over sort-union plans when they have similar costs, since sort-union requires an additional sort of the rows by row ID whereas ROR-union does not.
- For each part of a WHERE clause containing an OR condition, the range optimizer gets the best range scan possible and uses all these range scans to build an index merge scan (that is, a sort-union scan). If it finds that all the best range scans are also ROR-scans, the range optimizer always proposes a ROR-union scan, because it is always cheaper than a sort-union scan. Problems arose when the best range scan for any one part of an OR condition was not a ROR-scan, in which case the range optimizer always chose sort-union. This was true even in cases where it might be advantageous to choose a ROR-scan (even though it might not be the best range scan for that part of the OR condition), since doing so would eliminate any need to sort the rows by row ID.
- Now, in such cases, when determining the best range scan, the range optimizer also detects whether there is any possible ROR-scan, and uses this information to see whether each part of the OR condition has at least one possible ROR-scan. If so, we rerun the range optimizer to obtain the best ROR-scan for handling each part of the OR condition, and to make a ROR-union path. We then compare this cost with the cost of a sort-union when proposing the final plan.
- Selecting from a view sometimes raised the error Illegal mix of collations ... for operation '=' when the collation used in the table or tables from which the view definition selected did not match the current session value of collation_connection.
- Valid MySQL commands (use and status) and C API functions (mysql_refresh, mysql_stat, mysql_dump_debug_info, mysql_ping, mysql_set_server_option, mysql_list_processes, and mysql_reset_connection) could write an error message to the audit log, even though running the command or calling the function emitted no such error.
- Increased the maximum fixed array size to 8192 instead of 512. This fixes an issue with mysqladmin extended status requests, which can exceed 512 entries.
- The mysqldump --column-statistics option attempted to select from information_schema.column_statistics against MySQL versions before 8.0.2, but this now generates the warning column statistics not supported by the server and sets the option to false.
- The function used by MySQL to get the length of a directory name was enhanced.
- Executing a query with an implicit aggregation should return exactly one row, unless the query has a HAVING clause that filters out the row, but a query with a HAVING clause which evaluated to FALSE sometimes ignored this, and returned a row regardless.
- For a query with a derived condition pushdown in which a column in the condition needed to be replaced, a matching item could not be found, even when known to be present, when the replacement item was wrapped in a ROLLUP while the matching item was not.
- The presence of an unused window function in a query, along with an ORDER BY that could have been eliminated, led to an unplanned server exit.
- ORDER BY RANDOM_BYTES() had no effect on query output.
- Fixed an issue which could occur when loading user-defined functions.
- Concurrent execution of FLUSH STATUS, COM_CHANGE_USER, and SELECT FROM I_S.PROCESSLIST could result in a deadlock. A similar issue was observed for concurrent execution of COM_STATISTICS, COM_CHANGE_USER, and SHOW PROCESSLIST.
- The mysqldump utility could generate invalid INSERT statements for generated columns.
- During optimization, range-select tree creation uses logic which differs based on the left-hand side of the IN() predicate. For a field item, each value on the right-hand side is added to an OR tree to create the necessary expression. In the case of a row item comparison (for example, WHERE (a,b) IN ((n1,m1), (n2,m2), ...)), an expression in disjunctive normal form (DNF) is needed. A DNF expression is created by adding an AND tree with column values to an OR tree for each set of RHS values; instead, the OR tree was added to the AND tree, causing the tree merge to take O(n²) time.
- When using SELECT to create a table and the statement has an expression of type GEOMETRY, MySQL could generate an empty string as the column value by default. To resolve this issue, MySQL no longer generates default values for columns of type GEOMETRY under these circumstances.

MySQL 8.0.33.0 (64-bit)

Updated: 2023-05-02
Update details:

What's new in this version:

Audit Log Notes:
- MySQL Enterprise Audit previously used tables in the mysql system database for persistent storage of filter and user account data. For enhanced flexibility, the new audit_log_database server system variable now permits specifying other databases in the global schema namespace at server startup. The mysql system database is the default setting for table storage.

Compilation Notes:
- Microsoft Windows: Added MSVC Code Analysis support for Visual Studio 2017 and higher. This adds a new MSVC_CPPCHECK (defaults to OFF) CMake option that either enables or disables this analysis on the current directory and its subdirectories
- Downgraded curl deprecation warnings to -Wno-error for curl versions greater than 7.86 when MySQL is built with a GNU compiler or clang
- On macOS, added -framework CoreFoundation and -framework SystemConfiguration when linking the curl interface to link with shared system libraries as needed
- Replaced the MY_INCLUDE_SYSTEM_DIRECTORIES macro with library interfaces
- Improved CMake code to support alternative linkers
- Removed the deprecated Docs/mysql.info file from the build system
- Added a top-level .clang-tidy file and associated .clang.tidy files in the strings/ and mysys/ directories. Also enabled compdb support to enable clang-tidy usage on header files
- Removed several unmaintained or unused C++ source files for functionality such as uca-dump and uctypedump
- Added a CMake build option to enable colorized compiler output for GCC and Clang when compiling on the command line. To enable, pass -DFORCE_COLORED_OUTPUT=1 to CMake
- On Windows, also install .pdb files for associated .dll files if they are found for 3rd-party libraries
- Enterprise Linux 8 and Enterprise Linux 9 builds now use GCC 12 instead of GCC 11
- Building with -static-libgcc -static-libstdc++ now also builds the bundled protobuf with static libraries, as required.

Component Notes:
- INSTALL COMPONENT now includes the SET clause, which sets the values of component system variables while installing one or more components. The new clause reduces the inconvenience and limitations associated with the other ways of assigning variable values. For usage information, see INSTALL COMPONENT Statement
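A minimal illustration of the new clause; the component and variable named here are examples shipped with the server:

```sql
-- Install a component and set one of its system variables in one statement:
INSTALL COMPONENT 'file://component_validate_password'
  SET GLOBAL validate_password.policy = 'STRONG';
```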

Deprecation and Removal Notes:
- User-defined collations (see Adding a Collation to a Character Set) are now deprecated. Either of the following now causes a warning to be written to the log:
- Any occurrence of COLLATE followed by the name of a user-defined collation in an SQL statement
- Use of the name of a user-defined collation as the value of collation_server, collation_database, or collation_connection
- You should expect support for user-defined collations to be removed in a future version of MySQL
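A hedged sketch of the new warnings; my_custom_ci stands in for a user-defined collation (built-in collations are unaffected):

```sql
-- Each of these now causes a deprecation warning to be written to the log:
SELECT 'abc' COLLATE my_custom_ci;
SET collation_connection = 'my_custom_ci';
```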

MySQL Enterprise Notes:
- MySQL Enterprise Edition now provides data masking and de-identification capabilities based on components, rather than on the plugin library introduced in MySQL 8.0.13. The component implementation provides dedicated privileges to manage dictionaries and extends the list of specific types to include:
- Canada Social Insurance Number
- United Kingdom National Insurance Number
- International Bank Account Number
- Universally Unique Identifier (UUID)
- An improved table-based dictionary registry replaces the file-based dictionary used by the plugin. For a summary of the differences between the component and plugin implementations, see Data-Masking Components Versus the Data-Masking Plugin. Existing plugin users should uninstall the server-side plugin and drop its loadable functions before installing the new MySQL Enterprise Data Masking and De-Identification components

Performance Schema Notes:
- The Performance Schema Server Telemetry Traces service is added in this release. This interface provides plugins and components with a way to receive notifications related to the lifetime of SQL statements.
- For more information on this interface, see the Server telemetry traces service section in the MySQL Source Code documentation.

The following were added:
- Status variable Telemetry_traces_supported, which indicates whether server telemetry traces are supported (Boolean)
- The TELEMETRY_ACTIVE column was added to the threads table; it indicates whether the thread has an active telemetry session attached
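The new status variable and threads-table column can be inspected directly; a minimal sketch:

```sql
-- Whether this server build supports telemetry traces (Boolean status variable)
SHOW GLOBAL STATUS LIKE 'Telemetry_traces_supported';

-- Per-thread telemetry state via the new column on the threads table
SELECT THREAD_ID, NAME, TELEMETRY_ACTIVE
FROM performance_schema.threads
LIMIT 5;
```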

Functionality Added or Changed:
- Important Change: For platforms on which OpenSSL libraries are bundled, the linked OpenSSL library for MySQL Server has been updated to version 1.1.1t. Issues fixed in OpenSSL version 1.1.1t are described at https://www.openssl.org/news/cl111.txt
- Replication: As part of ongoing work to change old terminology used in MySQL products, the terms “master”, “slave”, and “MTS” have been replaced in error messages relating to MySQL Replication by “source”, “replica”, and “MTA”, respectively. This includes all error messages listed in messages_to_clients.txt and messages_to_error_log.txt relating to replication; the present task does not perform this replacement for messages used in other contexts.
- See the MySQL 8.0 Error Message Reference for more information
- Replication: mysqlbinlog --start-position now accepts values up to 18446744073709551615, unless the --read-from-remote-server or --read-from-remote-source option is also used, in which case the maximum is 4294967295
- Binary packages that include curl rather than linking to the system curl library have been upgraded to use curl 7.88.1
- The use of a generated column with DEFAULT(col_name) to specify the default value for a named column is not permitted and now emits an error message
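One plausible shape of the now-rejected usage (table and column names are hypothetical): a generated column has no default value, so referencing it through DEFAULT(col_name) is an error:

```sql
CREATE TABLE t1 (
    a INT,
    gcol INT GENERATED ALWAYS AS (a + 1)
);

-- Referencing the generated column's "default" now emits an error message
INSERT INTO t1 (a) VALUES (DEFAULT(gcol));
```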

Fixed:
- NDB Cluster: Occasional temporary errors which could occur when opening a table from the NDB dictionary while repeatedly performing concurrent schema operations were not retried
- NDB Cluster: During iteration, ordered index scans retain a cursor position within each concurrently scanned ordered index fragment. Ordered index fragments are modified and balanced as a result of committing DML transactions, which can require scan cursors to be moved within the tree. When running with query threads configured (AutomaticThreadConfig set to 1), multiple threads can access the same index fragment tree structure, and the scans of multiple threads can have their cursors present in the same structure.
- The current issue arose due to an assumption in the logic for moving scan cursors when committing DML operations that all scan cursors belonged to the LDM thread owning the index fragment, which did not allow for the possibility that such cursors might belong to query threads
- InnoDB: Dead code removal
- InnoDB: Error messages related to innodb_doublewrite moved to the error log
- InnoDB: Prevent online DDL operations from accessing out-of-bounds memory
- InnoDB: ALTER TABLE ... AUTO_INCREMENT could be set to less than MAX + 1 and not forced to MAX + 1
- InnoDB: Innodb_data_pending_fsyncs could show extremely high inaccurate values because of a variable overflow
- Partitioning: Some IN() queries on partitioned tables were not always handled correctly
- Partitioning: Queries using the INDEX_MERGE optimizer hint were not handled correctly in all cases
- Replication: XA transactions whose XIDs contained null bytes could not be recovered
- Replication: When binlog_order_commits was set equal to 1, for any two transactions and for any sub-step of the commit phase, the transaction that was written to the binary log first did not always execute the sub-step first, as expected
- Replication: The binary log recovery process did not report all possible error states
- Replication: Following CHANGE REPLICATION SOURCE TO SOURCE_CONNECTION_AUTO_FAILOVER=1, failover generated a number of misleading warnings in the log that implied there were problems when in fact conditions were those expected for such a failover. These log messages have been updated accordingly
- Replication: When a transaction failed, as a side effect, extraneous error messages relating to the replication data repositories were written to the log. Now in such cases, we suppress such error messages, which are not directly related to the issue of the failed transaction or its cause
- Replication: Setting binlog_order_commits to OFF could lead to a missed GTID in the next binary log file's Previous_gtids event.
- Our thanks to Yewei Xu and the Tencent team for the contribution
- Replication: Corrected the SQL statements suggested in the error message text for ER_RPL_REPLICA_ERROR_RUNNING_QUERY.
- Our thanks to Dan McCombs for the contribution
- Replication: A hash scan builds a hash of changes, scans the target table or index, and applies any matching change for the current entry. In the build phase, it uses only the before image, and skips any after image. Problems arose in some cases because generated columns were computed for the (skipped) after image, leading to replication errors. This is fixed by not computing generated columns any longer for seek-only calls such as hash scans.
- Our thanks to dc huang for the contribution
- Replication: In certain rare cases, it was possible to set gtid_mode=OFF for one session while another session, after WAIT_FOR_EXECUTED_GTID_SET() was issued by a user in this second session, was still waiting for the next GTID set from the first session. This could result in the second session waiting indefinitely for the function to return
- Group Replication: Accessing the Performance Schema replication_group_communication_information and replication_group_member_stats tables in parallel sometimes caused subsequent group replication operations to hang
- Group Replication: In certain cases, the group replication secondary node unexpectedly shut down while purging the relay log
- Group Replication: When shutting down the Group Replication plugin, the order in which the associated events were reported in the error log sometimes led to confusion. To remove any doubts, we now make sure that Plugin group_replication reported: 'Plugin 'group_replication' has been stopped.' is in fact the last log message relating to the shutdown, written only when all other events associated with shutting down the plugin have been logged
- Microsoft Windows: The authentication_fido_client plugin stopped responding during the authentication process if it was unable to find a FIDO device on the Windows client host
- In certain cases, CONVERT(utf8mb3_column USING UTF16) was rejected with the error Cannot convert string 'x--...' from binary to utf16
- When joining two tables on a string column, and the column from one of the tables has an additional predicate comparing it with a temporal literal, constant propagation in some cases incorrectly caused the join condition to be modified such that it used temporal rather than string semantics when comparing the strings. This caused incorrect results to be returned from the join
- Error messages returned after calling the mysql_reset_connection() C API function in a prepared statement did not identify the function name properly
- Fixed a regression in a previous fix for an issue with windowing functions.
- Our thanks to Dmitry Lenev for the contribution
- When replacing subqueries in transforms, the internal flag showing whether a given query block contains any subqueries (PROP_SUBQUERY) was not updated afterwards
- A client setting the character set to an impermissible client character set (ucs2, utf16, utf16le, or utf32) could cause unexpected behavior when the client used an authentication plugin
- EXPLAIN ANALYZE displayed 0 when the average number of rows was less than 1. To fix this, we now format numbers in the output of EXPLAIN ANALYZE and EXPLAIN FORMAT=TREE such that numbers in the range 0.001-999999.5 are printed as decimal numbers, and numbers outside this range are printed using engineering notation (for example: 1.23e+9, 934e-6). In addition, trailing zeroes are no longer printed, and numbers less than 1e-12 are printed as 0.
- This helps ensure consistent precision regardless of the number's value and improve readability, while producing minimal rounding errors
- The NTILE() function did not work correctly in all cases
- Some joins on views did not perform correctly
- Fixed an assert in sql/item_strfunc.cc that could potentially lead to issues with the SPACE() function
- Using ROW_COUNT() as the length argument to LPAD() or RPAD() did not perform as expected
- A query with a window function having an expression with a CASE function in its ORDER BY clause could lead to a server exit
- The fix for a previous issue introduced an assertion in debug builds when optimizing a HAVING clause
- When using mysqld_multi, the system that obscures "--password" usage as "--password=*****" would also match "--password-history" and "--password-require-current" definitions as "--password", but now explicitly checks for "--password=" instead
- In some cases, calling the mysql_bind_param() C API function could cause the server to become unresponsive
- The authentication_oci_client plugin was unable to open a valid configuration file if any of its entries contained an equals sign character separated by spaces (for example, key_file = /home/user/.oci/oci_api_key.pem). Now, both 'key=value' and 'key = value' entry formats are supported
- Incorrect results were returned when the result of an INTERSECT or EXCEPT operation was joined with another table. This issue affected these operations in such cases when used with either DISTINCT or ALL
- When preparing a view query, the operation used the system character set (instead of the character set stored in the data dictionary) and then reported an invalid character-string error
- Prepared statements that operate on derived tables, including views, could stop unexpectedly due to problems with the code for reopening tables after an error
- Removed an assertion raised in certain cases by the RANDOM_BYTES() function in debug builds
- There was an issue in how persisted variables were set on startup, causing certain variables not to get properly set to their persisted value
- The MAKETIME() function did not perform correctly in all cases
- Some functions with multiple arguments did not produce the expected results
- A table reference in an ORDER BY outside the parenthesized query block in which the table was used raised an error when that query block had no LIMIT or ORDER BY of its own
- A left join with an impossible condition as part of an ON clause was not optimized as in MySQL 5.7, so that in MySQL 8.0, the query executed more quickly without the impossible condition than with it. An example of such a query, impossible condition included, is SELECT * FROM t1 JOIN t2 ON t1.c1=t2.c1 AND 1=2
- When a user defined function was part of a derived table that was merged into the outer query block, or was part of a subquery converted to a semi-join, knowledge of whether this UDF was deterministic (or not) was lost during processing
- With JSON logging enabled and an event subclass specified in the audit log filter definition, an empty item ("" : { }) was appended to the end of the logged event
- Some subqueries did not execute properly
- After the asymmetric_encrypt() component function in a SELECT query encountered a NULL field to decrypt, it could return NULL values for other non-NULL encrypted fields
- The server did not always shut down cleanly after uninstalling the audit log plugin
- Certain antijoins were not handled correctly by the server
- When the MySQL 5.7 Optimizer has two choices for an index to filter rows, one primary and one secondary, it picks a range scan on the secondary index because the range scan uses more key parts. MySQL 8.0 did not use this logic, instead choosing the primary index to filter rows with WHERE clause filtering. Primary key use is not suitable in such cases due to the presence of LIMIT, and due to the nature of data distribution. The secondary index was not considered while resolving ORDER BY due to constant elimination. This resulted in very different query plans in MySQL 5.7 and MySQL 8.0 for the same query.
- We solve this issue in MySQL 8.0 by skipping the constant key parts of the index during order-by evaluation only if the query is constant-optimized, which can be done at this time, but not during LIMIT analysis
- The MySQL data dictionary caches failed lookups of se_private_id values (IDs which are not found), which speeds up execution of code specific to InnoDB, relying on the fact that InnoDB does not reuse these IDs. This assumption does not necessarily hold for other storage engines, most notably NDB, where this problem was resolved previously by not using this cache.
- We extend the previous fix made for NDB so that the cache lookup is now employed only when the table uses the InnoDB storage engine
- Unexpected results were seen in some queries using DENSE_RANK(), possibly with the addition of WITH ROLLUP
- Fixed an assert raised in sql/sql_tmp_table.cc following work done previously to reimplement ROLLUP processing
- Some CTEs that did not use any tables were not always handled correctly
- Accessing rows from a window frame of a window function call present only in the query's ORDER BY list raised an error
- PERCENT_RANK() used with ORDER BY column did not return the correct result
- The --exclude-tables and --include-tables mysqlpump options did not handle views
- Changed the MySQL systemd service unit configuration from After=network-online.target to Wants=network-online.target to ensure that all configured network devices are available and have an IP address assigned before the service is started
- AVG(...) OVER (ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING) did not return the correct result
- A query of the form SELECT 1 FROM t1 WHERE NOT EXISTS (VALUES ROW(1), ROW(2)) caused an assert in debug builds when the subquery_to_derived optimizer switch was enabled
- mysqlimport did not escape reserved word table names when used with the --delete option
- When cloning a condition to push down to a derived table, characters in strings representing conditions were converted to utf8mb4 correctly only for values less than 128 (the ASCII subset), and code points outside the ASCII subset were converted to invalid characters, causing the resulting character strings to become invalid. For derived tables without UNION, this led to problems when a column name from the derived table used characters outside the ASCII subset, and was used in the WHERE condition. For derived tables with UNION, it created problems when a character outside the ASCII subset was present in a WHERE condition.
- We fix these issues by initializing the string used for representing the condition in such cases to the connection character set
- Using --single-transaction with mysqldump version 8.0.32 required either the RELOAD or FLUSH_TABLES privilege. This requirement now applies only when both gtid_mode=ON (default OFF) and with --set-gtid-purged = ON|AUTO (default AUTO)
- Many joins using eq_ref access did not perform as well as in previous versions. This issue was first reported in MySQL 8.0.29
- Fixed a number of issues present in the internal documentation for the scramble generator algorithm in sha256_scramble_generator.cc and sha2_password_common.cc.
- Our thanks to Niklas Keller for the contribution
- CREATE USER IF NOT EXISTS added a password history entry even when the user already existed and the password was not updated. This caused a subsequent ALTER USER statement to be rejected
- A hash outer join sometimes incorrectly matched NULL with a decimal zero or an empty string that used a non-padding collation, leading to erroneous results
- An object used internally by ALTER INSTANCE RELOAD TLS was not freed until the number of readers reached 0, under the assumption that the number of readers should reach 0 fairly frequently. The read lock held during an SSL handshake is generally an expensive operation, with network calls, so when roundtrips between the client and the server took excessively long, the lock was held for a relatively long amount of time. This meant that, when changing the value of this object and there were a sufficient number of incoming SSL connections being made, the number of readers might not reach 0 in a reasonable length of time, leaving the thread holding the lock using 100% of the CPU until the lock was released.
- We fix this by adding a wait after setting the pointer to this object to a new value, but before releasing the old object.
- Our thanks to Sinisa Milivojevic for the contribution
- If mysqldump or mysqlpump could not convert a field's default value to UTF-8 (for instance, if the field was of type BINARY and the default value did not coincide with valid UTF-8), the operation produced results that were not valid to import. Further, using the --hex-blob option did not resolve the issue. We now convert the default value to the system character set. If this fails, the server sends the value as hexadecimal instead to make it more human-readable
- A connection using the C API (libmysqlclient) client library could fail with the FUTURE crypto policy
- While cloning a temporary table for a common table expression which used shared materialization, the cloned temp table was not marked as using hash deduplication, leading to wrong results. We now set the hash field for the cloned temporary table correctly, and update the hidden field count to take this into account
- CREATE EVENT and ALTER EVENT assumed that all values passed to them (other than in a DO clause) resolved as scalars without actually checking the values. This led to assertions when any such values were actually rows.
- We now perform an explicit check for the number of columns when resolving such items, and report an error when one produces a row and not a scalar value
- A view reference whose underlying field is a constant was not marked as constant when the reference was part of an inner table of an outer join. It was found that, when pushing a condition down to a derived table, the reference was stripped off and only the underlying field was cloned, which made it a constant, and led to wrong results.
- To fix this problem, we ensure that we do not push such a condition down to the derived table by adding a check to see first whether the table used by the condition matches the derived table or is a constant expression; only when it is one or the other of these do we actually push the condition down

vMix 26.0.0.44

Updated: 2023-05-02
Update details:

What's new in this version:

vMix 26.0.0.44
- NEW NAL CBR option in Streaming Quality settings
- Removed discontinued streaming destinations
- Old FFmpeg removed due to licensing issue


vMix 26.0.0.42
- New Audio Track selection option for SRT input, to select only a single audio track from the source instead of all at once
- New audio support when using SRT inputs with Replay
- Fix for Title Preset CSV exports when using " quotes


vMix 26.0.0.40
- Added support for multiple audio tracks in SRT streams from OBS
- Fixed memory leak in SRT inputs when the source is constantly reconnecting
- Fixed Stream Delay not working with LiveLAN
- Stream settings in Stream Quality that are shared across all streams (such as stream delay) now appear on all streams in the interface but are greyed out
- These settings can still only be edited from Stream 1


vMix 26.0.0.38
- Fix aspect ratio issue with 4:3 sources in List input
- Fix SRT outputs not turning off correctly when also changing other SRT settings at the same time


vMix 26.0.0.37
Fixed:
- line spacing from GT titles imported from PSD not displaying correctly
- List input not displaying rotated/vertical videos correctly


vMix 26.0.0.36
- Added Alpha Mask Effect, use PNG files to quickly add masks to inputs
- Fixed Use Source Settings checkbox for Effects not being saved in preset


vMix 26.0.0.35
- Fixes and improvements to Effects handling in Virtual and PTZ inputs
- Effects will now refer to original inputs effects by default with a checkbox similar to Colour Correction tab
- Audio selection hidden for Insta360 Link to prevent confusion, as Insta360 Link Microphone needs to be added as its own input


vMix 26.0.0.32
- Improved SRT CBR PCR handling
- Various bug fixes for effects including when used with Mirror, GT inputs, Photos/PowerPoint inputs and Virtual inputs when opened from a Preset
- Performance improvement for effects, by only rendering once when used with a static Image input


vMix 26.0.0.30
- Change log not available for this version


PerformanceTest 11.0 Build 1000

Updated: 2023-05-02
Update details:

What's new in this version:

PerformanceTest 11.0 Build 1000
Score Rebalancing:
- Rebalanced PassMark rating calculation to lower the influence of 2D while trying to maintain a similar PassMark rating for an "Average" system
- 3D Mark value calculation updated to take into account lower DX10/11 frames rates so 3D mark value is still comparable to V10
- Disk Mark value calculation changed slightly to take into account higher possible sequential read/write test scores. The mark will remain similar to V10 but will likely be slightly higher on newer drives and lower on older drives
- Baselines from older versions of PerformanceTest will have their DX10 & DX11 results scaled down when displayed
- Baselines from older versions of PerformanceTest will have their PassMark rating re-calculated when displayed

New Advanced Database Benchmark:
- Added new advanced database benchmark to test local and remote databases (SQLite, MySQL, MariaDB, PostgreSQL, MS SQL Server). This runs a standardised test allowing both the comparison of the relative performance of different database software and the comparison of different hardware / cloud platforms.
- The following databases are currently supported: MySQL 5.6/5.7/8.0, MS SQL Server 2019, PostgreSQL v15.1
- Results can be optionally uploaded to our website and saved in a baseline. Once we have sufficient data we'll start to publish some public DB benchmark comparison data.

Disk Tests:
- Increased block size for sequential read/write tests from 32KB to 128KB, this should lead to faster results for newer drives. Disk mark was rebalanced slightly so the mark will remain similar but will likely be slightly higher on newer drives and lower on older drives

3D Tests:
- DX10 test, increased object count during test to 50 Islands, 250 Meteors, which will result in a lower frame rate compared to V10
- DX11 test, increased object count during test to 200 jellyfish, which will result in a lower frame rate compared to V10
- DX12 test, added a frame limit on the "warp" effect at start to try and prevent a "grid" being displayed on NVidia cards

CPU Tests:
- Added MatrixMultiply NEON test for ARM CPUs, run during the SSE tests
- Added a second page to the CPU test tiles, navigable with UP and DOWN arrows
- Moved Cross-Platform Mark and Gaming Score to second page
- Added Gaming Score as a tile to the CPU results, similar to Cross-Platform mark this is an aggregate of other tests

Advanced CPU Tests:
- Added DNN based face detection test

Advanced Disk Test:
- Added new "Marketing Performance" section with two sets of tests that matches/lines up with CrystalMark Peak Performance profile settings. A Read/Write of Sequential 1 MB Block Size Queue 8 and moved Read/Write Random 4K Queue 32 to this section
- User can specify number of iterations per test and choose to keep new or the best result
- Added collected temperature measurements to the Export TAB/CSV/HTML results after running test
- Adjusted resizing of test window
- Fixed bug where latency was not calculated properly
Advanced Internet Speed Test:
- Added manual calculation (SumRTT/CountRTT) of Avg Latency if "SmoothedRTT" is not set

Advanced Memory Test:
- Added a section on the main dialog to display the Latency test result instead of a pop-up message box
- Fixed some controls not being enabled/disabled correctly

Advanced Network Test:
- Added option for multiple simultaneous network tests from the same PerformanceTest client
- Allow uploading of Advanced Network Test results to user's account (i.e. with API Key)

Advanced GPU Compute Test:
- Added new tests for single precision FP (FP32), double precision FP (FP64), integer, fast integer, device memory read and write

Manage Baselines:
- Users can Right-Click the column header in the baseline listview to select the fields they wish to display
- Added tooltip to Header Row to display results units (e.g. MOps/Sec) for the tests result column
- Added "Move Up/Move Down" right-click buttons to the Manage Baselines Currently Selected tab to allow display order to be chosen

User Interface:
- Added tooltip to baseline # containing system information
- Added extended tooltip information with individual results for tests that run a number of sub-tests
- Updated 3D models being used, added different models for HDD/SSD/NVMe
- Added ARM CPU model
- Video card colour will change to represent specific manufacturer
- Check for current power mode on laptops and warn user (works on Windows 10 v1809 and beyond)

Warning message pop up condition is:
- On battery: Win 10 & 11 - If not on the Best performance mode
- Plugged in: Win 11 - If not on Balanced or Best performance mode; Win 10 - If not on Better performance or Best performance mode
- Added a check if the DirectIO driver failed to load when collecting temperatures to try and prevent an error box being displayed each time there was a temperature sample
- Added a more informative error message when trying to use a key from a newer version of PT than the current one

Baseline Files:
- Added subtest results for tests to baseline ( SSE (CPU), encryption (CPU), single threaded (CPU), cross-platform (CPU), image filters (2D), direct 2D (2D) )
- Added effective power mode


PerformanceTest 10.2 Build 1017
- Fixed possible issue on certain Ryzen systems where they cannot reboot after running PerformanceTest. Root cause seems to be a motherboard BIOS bug in SMBus register settings. This release contains a workaround for the bug. If you see this, symptoms are that the machine fails to detect RAM sticks on a reboot, which might result in a 3-beep error and/or a black screen. A second reboot or clearing CMOS fixes the problem.


PerformanceTest 10.2 Build 1016
- Fix possible crash at start-up when trying to collect Intel GPU temperatures


PerformanceTest 10.2 Build 1015
- Fixed possible crash at start-up when getting clean HDD name (could be triggered by having a Microsoft storage space device setup)


PerformanceTest 10.2 Build 1014
- Increase frame rate check for DX11 'result too high' error message as some newer high powered cards are well above this limit


PerformanceTest 10.2 Build 1013
- Install/Run from USB, fixed a bug where the default disk location could default to the USB drive letter instead of C: when running the disk test from USB
- DX12 test, added a frame limit (rather than time based) on the "warp" effect at start to try and prevent a "grid" being displayed on nvidia cards
- Advanced CPU, fixed graphing anomaly drawing 0 value as last data point of graph and not drawing graph when only 1 data point existed

Added:
- a check if the DirectIO driver (used for system information and temperature collection) failed to load when collecting temperatures to try and prevent an error box being displayed every sample
- GPU naming for several Ryzen CPUs (542X, 5300U, 5400U, 5700U, 3200, 3400, 3500 U/G/GE, 7000)
- Made a change so the drive tested isn't set to "Unknown" if physical disk number doesn't match the found index (seen on a system with removable sata drives after being removed)
- Initial support for Intel ARC GPU temperature collection
- support for collecting RAM info from systems with more than 64 memory slots
- support for retrieving DDR5 SPD on Intel Raptor Lake-S (13th gen)
- support for retrieving DDR5 SPD on AMD Ryzen chipsets
- Ryzen 7000 series support for information, temperature and naming integrated graphics


PerformanceTest 10.2 Build 1012
- Fixed DirectIO driver failing to load on some systems (caused by 22H2 updates) which would prevent temperatures from being collected and could display Windows "Program Compatibility" driver cannot load errors
- Updated system information library to support retrieving CPU info for Intel Raptor Lake, Tremont and Sapphire Rapids chipsets
- Fixed retrieving cache info for Intel hybrid architectures


PerformanceTest 10.2 Build 1010
- Remove dependency on VCRUNTIME140.DLL


PerformanceTest 10.2 Build 1009
- Updated Directio64.sys with another newly Microsoft signed version to fix issues with Windows 11 22H2


PerformanceTest 10.2 Build 1008
- Fixed key.dat not being created correctly when installing to USB
- Added a warning message at start-up if digital signature is invalid for PerformanceTest executable


PerformanceTest 10.2 Build 1007
- Fixed opening of .ptscript files to handle ANSI and unicode (with BOM)
- Baseline upload, changed to HTTP post for uploading files instead of FTP (to prevent future issues with moving servers again)
- Advanced memory test, fixed the progress bar overflowing/resetting to the start while testing
- Fixed opening of key.dat files so that both ANSI and UTF8 key.dat files will work (previously only UTF8 file formats would work correctly)


PerformanceTest 10.2 Build 1006
- Fixed baseline uploading that was broken due to passmark.com moving to a new webserver
- Added check for existence of selected disk before starting test and added error message when not found
- Rebuilt ARM CryptoPP with newest version of VS2019


PerformanceTest 10.2 Build 1005
- Fixed an issue on 12th gen Intel CPUs that could result in the calling thread being locked to a low performing E-Core (thread affinity was not being set back to the original correctly) which would result in lower than expected 2D test scores
- Fixed bug where PerformanceTest failed to generate an API key when running from USB/using a key.dat
- Fix a crash that could occur when using "Copy Results as Text"
- Advanced memory test, fixed a bug preventing the graph from being displayed correctly after running a threaded test


PerformanceTest 10.2 Build 1004
- ARM, Reverted CryptoPP library back to previous version to avoid SHA issues that were causing many problems with internal hashes for the main user interface; the CPU test (encryption) will continue to use the latest CryptoPP version as it is a separate executable. This is an ARM-only change.
- OpenCL test, attempt to pick discrete (non-integrated) card when more than one card of the same manufacturer is available (seen on AMD systems with both integrated and discrete AMD cards)


PerformanceTest 10.2 Build 1003
- ARM64, more SHA hashing related fixes that were stopping baselines from being submitted
- Fixed an unusual case where disk partition information would not be displayed if the disk's MBR contained empty partition information for the first entry, this could also stop the disk being displayed for testing


PerformanceTest 10.2 Build 1002
- ARM64, added workaround for broken SHA hashing in CryptoPP (using windows Bcrypt library instead) that was causing licencing errors and failure to show charts
- Made a change so ARM CPU info is loaded at start-up (to help with graphing display)
- Fixed bug in system information collection of Ice Lake-SP (3rd gen Xeon Scalable) CPU info
- Fixed an issue in system information collection when collecting disk partition info where function could return too early


PerformanceTest 10.2 Build 1001
- Fixed incorrect P/E core reporting in user interface (values were switched)


PerformanceTest 10.2 Build 1000
- Updated system information and user interface to distinguish between P (performance) and E (efficiency) cores, for Intel 12th Gen CPUs
- Correct detection of available P and E cores also allowed the correct number of threads to be run in the multi-threaded CPU tests
- Updated system information with support to handle multiple CPU caches at each level
- Added '/au' flag that auto-runs PT and then uploads the result to the PassMark website
- Updated the Crypto++ library version in use to 8.6; this should result in much faster encryption results on ARM systems due to increased use of available CPU instructions, and slightly faster results on x86 systems. The previous library didn't use hardware acceleration on ARM (except on the Apple M1)
- Encryption Test, AES, made some changes to work with latest version of Crypto++. This was required because V8.6 of Crypto++ was much slower than previous releases unless memory buffer usage was also modified.
- Baseline Upload. Simplified the upload window and added use of an API key so uploaded baselines can be associated with a user. Users can (optionally) upload baselines to their account and track performance changes over time. Upon entering a license key and registering, an API key is created/retrieved from the server. A free API key can also be requested from the PassMark website once an account is created. Anonymous uploads are still supported.
- Removed 5% difference for uploading baselines check. This allows similar baseline files to be uploaded. Duplicates will still be ignored when creating the global averages however.
- Baseline management, fixed loading of some baselines that were missing system information sections
- Started displaying power source "AC" or "Battery" on system information - old baselines with no information will be N/A. Battery powered systems tend to have reduced performance due to their power plan setup.
- Fixed a crash that could occur in the ARM version when opening the baseline manager dialog
- Made a change on system information tab to display RAM SPD info if only partially collected for system (eg a mix of RAM modules that don't all return SPD details)
- Fixed bug where when running from USB drive, the config file loaded on program start was reading the settings on the desktop install and not the USB drive
- GPU Compute, fixed a crash when starting 3D NBodyGravity test if no DX11 adapters were found
- Added Windows 11 and Server 2019 to baseline manager search
- Added 'View Last Uploaded Baseline...' menu option
- CPU Integer test, made some minor changes to the order of operations in the test; this should stop them being optimised away on non-Windows builds, which use the Clang compiler instead of Visual Studio (and has no effect on Windows builds).


PerformanceTest 10.1 Build 1007
- PDF Test, removed min/max/close buttons from window
- 2D Mark, stopped mark being calculated if some of the older tests (eg simple vectors) hadn't been run
- Disk Test, attempt to stop a crash caused by failing/mis-performing hard disks taking too long to close a file handle after a test
- Updated system information library to correct some incorrect CPU cache values being returned


PerformanceTest 10.1 Build 1006
- Fixed PerformanceTest not launching as Administrator by default which would result in not collecting temperature and system information correctly


PerformanceTest 10.1 Build 1005
- Added naming support for Ryzen 5/7/9 5000 series integrated video cards
- Started remembering and restoring current PT window position when running 3D tests due to strange behaviour where the window could be moved around different monitors on some setups
- System information updates


PerformanceTest 10.1 Build 1004
- Fixed a potential crash when checking for an updated version
- Updated system information library


PerformanceTest 10.1 Build 1003
Fixed:
- a timing issue that could cause the CPU test to stop immediately and return no results
- a timing issue that could cause the Disk test to stop immediately and return no results or crash
- a crash that could occur when using the German translation and mousing over the chart comparing your hardware with the distribution graph


PerformanceTest 10.1 Build 1002
- CPU Tests - Made wait time slightly longer due to some timeouts waiting for tests to finish on systems with large thread counts
- CPU, Compression test - Decreased the number of loops performed before checking if the test time was reached, to prevent a timeout on systems with large thread counts
- Fixed an issue reading AMD 4600/4800 CPU temps
- Fixed an issue preventing AMD GPU temperatures from being read if ADL_Graphics_Platform_Get failed to be loaded from the ADL library


PerformanceTest 10.1 Build 1001
- Disk Tests and Advanced Disk Test, changed calculation of MB values to use international system of units (SI) value for MB (1,000,000 bytes) instead of MiB (1,048,576).
- This brings it closer into line with how disk manufacturers are marketing disk drive speeds.
- Old baseline values for disk tests and disk mark will be converted to new display value.
- Drive Performance and Advanced Disk Test, fixed a bug that was causing RAM disks using logical drive emulation to be excluded from the list of available drives to test
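The unit change above can be illustrated with a short sketch. This is an illustrative example only, not PassMark's code; the function and constant names are assumptions:

```python
# Illustrative sketch of the MB-vs-MiB change (not PassMark's code).
# Before 10.1 Build 1001 disk speeds were shown using the binary MiB
# (1 MiB = 1,048,576 bytes); now they use the SI MB (1 MB = 1,000,000
# bytes) that drive manufacturers quote.

MB = 1_000_000   # SI megabyte
MiB = 1_048_576  # binary mebibyte

def to_si_mb_per_sec(bytes_per_sec: float) -> float:
    """Display value used from Build 1001 onwards."""
    return bytes_per_sec / MB

def convert_old_baseline(mib_per_sec: float) -> float:
    """Rescale a pre-10.1 MiB/s score to the new SI MB/s display value."""
    return mib_per_sec * MiB / MB

# A drive doing 524,288,000 bytes/s reads as 500 MiB/s but 524.288 MB/s,
# so the same old baseline now displays a ~4.9% larger number.
print(to_si_mb_per_sec(524_288_000))  # 524.288
print(convert_old_baseline(500.0))    # 524.288
```

The roughly 4.9% inflation (1,048,576 / 1,000,000) is why converted old baselines show slightly higher numbers after this build.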


PerformanceTest 10.1 Build 1000
- Score display, due to extremely low scores for some 2D/3D results on Microsoft Surface ARM systems, a score of less than 10 is now displayed to 1 decimal place.
- Started filtering "with" out of some CPU names when dynamically creating a Radeon graphics card name (eg Radeon Graphics Ryzen 9 4900HS)
- Baseline custom colours, changed the default yellow to a darker shade for better readability of white text when it is drawn over the graph
- Fixed a crash that could occur in the disk test
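A minimal sketch of the low-score display rule described above (an assumed implementation for illustration, not PassMark's actual formatting code):

```python
def format_score(score: float) -> str:
    # Assumed rule: scores under 10 (seen with very slow 2D/3D results,
    # e.g. on Microsoft Surface ARM systems) keep one decimal place so
    # they don't all collapse to 0; larger scores stay whole numbers.
    return f"{score:.1f}" if score < 10 else f"{score:.0f}"

print(format_score(3.27))    # "3.3"
print(format_score(1234.6))  # "1235"
```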

Initial release for Windows ARM support. All CPU, 2D, Memory and Disk tests have been natively compiled to run on Windows ARM:
- For 3D tests, due to limited support for some older DX9 and DX10 helper libraries these tests have not been converted and will not be run in the Windows ARM version of PerformanceTest.
- The DX11 test will be run in x86 emulated mode, again due to limited support for some of the DirectX libraries used for them. The DX12 test has been compiled to run natively.
- For the 3D GPU compute tests, the Nbody gravity, Mandelbrot and Qjulia4D have been compiled to run natively on Windows ARM. The OpenCL test cannot be run on Windows ARM.


PerformanceTest 10.0 Build 1011
- Stopped the user interface resizing while 3D tests are running; sometimes during the start of the DX10 test a resize message would be sent to PerformanceTest, which would fail and display a "Could not reset the Direct3D device" error
- Fixed an issue where, if a tested drive was BitLocker encrypted, PerformanceTest might not match the partitions to the physical disk and would display "Unknown drive"
- Fixed a possible crash when viewing graphs
- Temperature collection, added support for Intel Tiger Lake chipsets
- Temperature collection, initial support for AMD Ryzen 5000 Series (Family 19h) CPU info and temperatures
- Temperature collection, support for CPU groups when retrieving per-core temperatures for Intel chipsets
- System information, decreased the timeout from 5 seconds to 1 when collecting SMART information while waiting for SCSI commands to time out (this decreases the amount of time spent detecting temperature sources and reading SMART info when USB drives are attached)


PerformanceTest 10.0 Build 1010
- Fixed an issue generating a 2D mark in Windows 7
- Fixed possible crash when loading advanced test result graphs
- Fixed crash that could occur when resizing the user interface when no baselines were selected
- Disabled old DirectWrite code path for fonts and text test as it could crash when it tries to fall back to it


PerformanceTest 10.0 Build 1009
- Updated internal YAML library used for baselines and test results
- Baselines, fixed an issue where there were two instances of b48BitAddressSupported in the SMART info which would break YAML parsing
- Baselines, fixed an issue where there were two instances of iComputeUnits in the video card info which would break YAML parsing
- Baseline chart user interface, increased size of chart buttons
- Baseline chart user interface, move gauge/distribution chart buttons to bottom right of chart
- System Information, added support for NVMe drives behind USB-NVMe bridge (eg JMicron JMS583, Realtek RTL9210, ASMedia ASM2362)
- System Information, added naming support for AMD Ryzen 5/7/9 4000 series integrated graphics
- System Information, fixed a security issue with DirectIO device driver that runs as part of system information collection. Hypothetical exploit was possible that allowed user to bypass operating system restrictions & install arbitrary software. But user would already need to be the elevated Admin user on the local machine to take advantage of exploit. So overall additional risk is low. No usage of this exploit has been seen in the field. New DirectIO version is V13.0


PerformanceTest 10.0 Build 1008
- System Information, fixed a security issue with DirectIO device driver that runs as part of system information collection. Hypothetical exploit was possible that allowed user to bypass operating system restrictions & install arbitrary software. But user would already need to be the elevated Admin user on the local machine to take advantage of exploit. So overall additional risk is low. No usage of this exploit has been seen in the field. New DirectIO version is V12.4
- Advanced Disk Test, added 16MB, 32MB, 16GB and 32GB options to test file sizes
- Increased main window default size slightly to show all elements correctly


PerformanceTest 10.0 Build 1007
- Advanced memory test, Fixed graphing button not being disabled for latency test in some instances (no graphs available for latency test)
- Memory test, Database operations, limited max amount of test threads to 32 for this test due to timeouts
- 2D Test, Direct2D, fixed a possible situation where the test could return a 0 score when not running at the default resolution
- 2D Test, Direct2D, fixed a penalty calculation error that could result in a score of 0
- PDF Test, fixed a bug that was preventing the test running on Windows 8.1
- PDF Test, fixed a bug where the PDF test was attempting to run on Windows 8 while the minimum supported version was 8.1
- Allowed a 2D mark to be generated if missing 1 test (eg Windows 8.1 but still failing to run the SVG test)
- Stopped 2D tests setting the flag for no DX11 support; Windows 8.1 systems with DX11 support may still fail the SVG test
- Fixed crash that could occur in primes test when run on a single core single threaded CPU
- Fixed a BSOD on startup when running using QEMU


PerformanceTest 10.0 Build 1006
- Fixed a BSOD when running in an Amazon EC2 instance
- Changed cross platform mark to use the best result out of the normal prime tests and a primes test limited to physical cores


PerformanceTest 10.0 Build 1005
- Save as text, added an option to include system information when exporting to tab and semi-colon format text
- 2D Tests - Fonts and text, fixed a possible crash when the test setup fails
- Added encryption sub-test results
- Cross-platform mark, made some changes so a second primes test is run that is restricted to only physical cores and this score is what is used for cross platform mark


PerformanceTest 10.0 Build 1004
- CPU Test - Compression test, made some changes to increase speed of compression test. Now using std::minstd_rand due to changes in Windows 10 that decreased the speed of the rand() function
- CPU Test - Single thread test, due to changes in the compression test the single thread test will now be slightly higher
- CPU Mark - due to compression and single threaded test changes the CPU mark has been rebalanced for these changes. The single thread test is now weighted slightly more than the other tests
- Advanced Network test - Fixed a bug preventing the TCP server thread starting correctly so the TCP test would stop after a few seconds


PerformanceTest 10.0 Build 1003
- Histogram charts, fixed some issues loading V10 charts, fixed some missing charts for new tests.
- Baseline manager, fixed a crash that could occur when there were no CPU/GPU model names loaded from the chart data
- Started adding PT8 or PT9 flags to all old V8 and V9 baselines when an individual test result is displayed
- Install to USB, fixed a missing subfolder from the media folder that was not being copied to the USB correctly


PerformanceTest 10.0 Build 1002
- CPU tests, Single Threaded, started scaling single threaded score down to be closer to PT9 for better comparability with older results
- 2D Tests, 2D Image Rendering, changed score displayed to thousand images/sec instead of images/sec for better readability
- Fixed drag and drop of baselines onto the main window not working
- Fixed an issue for the test status window, when running a single CPU test it was not correctly displaying the test that was running


PerformanceTest 10.0 Build 1001
- Change log not available for this version


PerformanceTest 10.0 Build 1000
- Score rebalancing Due to the large amount of changes made to the 2D/3D/CPU/Disk tests all the calculated mark values have been rebalanced and scaled to be similar (but not exactly the same) to that of PerformanceTest 9
- Individual test scores have not been scaled so a direct comparison cannot be made in many cases between version 9 and version 10
- Windows Support No longer supporting Windows versions older than Vista
- Compiler updates We have switched from using Visual Studio 2013 (for V9) to Visual Studio 2019 for V10. Newer compiler versions typically bring improved code optimisation and use of newer CPU instructions
- CPU Tests Enabled compiler optimisations for the CPU tests that had previously been disabled. This has resulted in better performance on newer CPUs when compared to older ones
- Compression Test, replaced with a new version that uses the Crypto++ Gzip library. Previous versions of PerformanceTest were using an adaptive encoding algorithm, which gave good compression rates on text, but wasn't in common use. Zip is the de facto standard for real world data compression
- Encryption test, removed Salsa and TwoFish from the sub tests that are run and replaced them with an ECDSA (Elliptic Curve Digital Signature Algorithm) sub test. Previously PerformanceTest V9 had 4 sub-tests: TwoFish, AES, Salsa20 & SHA256. Now there are three sub-tests: AES, ECDSA & SHA256. These are all algorithms that are heavily used in the real world
- Extended Instructions (SSE), added an AVX512 test (when available). AVX512 is a new set of CPU instructions found in newer CPUs
- Extended Instructions (SSE), fixed a pointer math issue in the test that was referencing incorrect memory locations. Accessing the correct data helped with data alignment and improved test throughput
- Extended Instructions (SSE), made some changes to the SSE/AVX/FMA tests for how the results are retrieved and stored for next calculation loop (using _mm_storeu_ps and mm256_storeu_ps)
- Extended Instructions (SSE), removed custom aligned vector/matrix class and switched to standard vector/matrix class. Changed how matrix data is loaded before test (using _mm256_broadcast_ps)
- Integer Math, made some changes to add better support for out of order execution. This makes the algorithm less linear and gives modern CPUs the chance to get more calculations done in parallel
- Added a Cross-platform mark to the CPU test, made up of the Integer, Floating point, sorting and prime test scores. This will be calculated when loading a V9 baseline in V10 if the required scores are available. This cross platform score is not used when generating the overall CPU mark as it is based on previously run tests. We envisage that we'll use these results in the future for cross platform comparisons (x86 PCs vs ARM based mobile devices)
- Physics test, updated Bullet physics engine to version 2.88. Previously we were using 2.83
- 2D Tests Added a new SVG (Scalable Vector Graphics) image rendering test that will open and display several SVG images
- Added a new “PDF Render” test that will open a PDF and scroll through the available pages
- Changed default test size to 1920 x 1080. This should place more load on the video card than was previously the case, so frame rates are lower across the board compared to V9. Tests will scale down to 1024×768 with a penalty applied, and will not run at resolutions lower than this
- Direct 2D, increased amount of objects displayed during test
- Windows Interface test, increased size of dialog and number of controls on dialog
- Complex vectors, changed so that rendering loop resets sooner rather than most of the drawing happening off screen towards the end of the test
- Image Filters and Image Rendering, added DX11 versions of this test. Results are a combination of old and new tests
- Due to these updates scores and the 2D mark value in older versions of Windows (eg Windows 7) will be far lower as it isn’t possible to scale meaningfully when multiple tests can’t be run
- 3D Tests DX9, changed default resolution to 1920×1080, test will be scaled down and a penalty applied if it cannot be run at this resolution
- DX9, increased default Anti Aliasing level to 8, changed skybox and number of objects in scene. This was done with the aim of making the test less CPU bound
- DX10, increased default resolution to 1920×1080, test will be scaled down and a penalty applied if it cannot be run at this resolution. This adds load to the video card
- DX10, increased number of islands and meteors during test
- DX10, fixed a bug when enumerating display modes for the DX10 test where no compatible card would be found if there was a large amount (>500) of display modes
- GPU Compute, replaced the Bitonic sort test with an NBody Gravity test. Nobody knew what a Bitonic sort was or how it applied to the real world. NBody simulations on the other hand are a fairly common scientific application and they make a good visual impact
- GPU Compute, increased default size for sub tests to 1920×1080
- GPU Compute, OpenCL test, made some changes to particle size and variables used for calculations so more is happening during the test
- GPU Compute, Mandelbrot test, increased number of iterations 10x to slow down the test execution
- Started allowing “Microsoft RemoteFX Graphics Device” for 3D support on some VMs
- Memory Tests Increased amount of memory that non-cached tests use to 512MB of RAM (up from 256MB)
- Database Operations, changed to use an in memory sqlite3 database based on SQLite 3.31.1. Will run for maximum available physical cores and use at least 128MB RAM per thread
- Threaded, changed to run a range of threads up to the maximum available (eg 2, 4, 8, 16 or 3, 6, 12 depending on available core count) and use the highest score. Amount of RAM used will depend on number of threads: <= 16 threads 512 MB, <= 64 threads 1 GB, > 64 threads 2 GB
- Threaded and Database operations, added support for processor groups and thread affinity
- Latency test, now will take measurements based on 16KB, 64KB and 8MB ranges (previously was just 64KB) and use the average value of the three tests for the score. This will give a broader range of samples and will result in higher latency figures than PT9
- Disk tests Changed name of “Random Seek” test to “32KQD20” to better represent the test (using 32K block size with a queue depth of 20)
- Added a new “4KQD1” test (using 4k block size with a queue depth of 1)
- Increased test file sizes to 400MB for the write test, 800MB for the read test (traditional hard drive). If the drive is an SSD then it is 1GB for the write test and 2GB for the read test
- Removed the CD test
- Advanced Physics Test Added option to allow resolution to be selected
- Updated Bullet physics engine to version 2.88. Previously was using 2.83
- Added message check on exit to stop “not responding” when closing while using a large number of objects
- Advanced Network Test Added threading, Windows RIO (Registered Input/Output API Extensions) sockets option. These changes were made to achieve higher throughput & lower latency, particularly on 10Gb+ networks. You should now be able to really push the limits of your networking gear with these changes
- Advanced Disk Test Made some changes to try and stop crashes during the advanced disk IOPS test when the hard drive is failing/responding abnormally
- Added temperature collection to test results. Can now choose to display the temperature or the latency heat map when displaying a graph. This has become important as some SSDs throttle down their speed under high temperatures
- Advanced Memory Test
- Added “Threaded” test option to advanced memory test; currently will run the threaded memory test starting with 1 thread up to (Cores * Threads per core) threads. Each test loop is repeated 3 times and the best result stored. Now opens the graph automatically at end of testing
- Updated latency test to use same settings as standard test, running the random range latency test for 16KB, 64KB and 8MB ranges and then averaging the results
- NEW Advanced CPU Test Added an advanced CPU test, this allows individual CPU tests to be run from 1 thread up to a specified number of threads and then the results graphed
- NEW Advanced Internet Speed test Added an implementation of M-Labs (https://www.measurementlab.net) internet speed test that will connect to their servers, perform a 10 second upload test, a 10 second download test and then display the results from that test
- Baseline management Re-enabled choosing of colours for loaded baselines, this will only affect the bar graph colour of the baseline and the text colour will not change
- Localisation Added a section in the installer to allow selection of a supported language, on install first launch PT will now choose that language by default
- Reporting Added percentile options to exported text, formatted text and HTML reports
- Baseline Management Added a way of tracking submitted baselines and displaying them in the advanced baseline management dialog under the “Uploaded” tab
- Scripting Added SETRERUNRESULT to toggle the re-run config setting. Choose between keeping BEST and NEW result when re-running tests
- Added HIDEBASELINES option to only add current computer results to exported results files
- Added CPU_RUNTEST and ACPU_SETEXPORT commands for scripting the advanced CPU test
- Changed “Result Date” to be local time instead of UTC for consistency (Windows install time was already local time)
- HTML report output, fixed a bug where the Unicode BOM was not being correctly written to the start of the file if the file was opened in append mode and didn’t already exist
- Removed 2 extra line breaks being added at end of records for REPORTSUMMARYCSV command
- Misc Chart display, added percentage difference to baselines when a score for “this computer” is available. Also added an option in the preferences to enable/disable this
- Fixed an incorrect error message in the advanced drive performance test when a selected disk did not have enough free space
- Fixed a bug when saving results to an image, the scrollbar width was being applied and causing some results to be hidden
- Added a gray rectangle to custom list view header so the boundary that can be used to resize the columns is highlighted
- System information, Changed ram details in baseline system info to display in GB
- Now displaying “Baseline #X” instead of just “#X” on the system information window for loaded baselines
- Report exports, added BIOS version and hard drive size as a separate field to the exported report system information
- User Interface, Made back/prev/next buttons in 3D component info view slightly lighter so they stand out more
- No longer displaying PNP ID on Video card system info display
- Fixed some preferences dialog alignments
- CSV export, fixed an issue where multiple “unknown disk” entries could be output and change the column ordering
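The Threaded memory test behaviour described in the list above (a ladder of thread counts ending at the maximum available, plus a RAM budget that steps up with thread count) can be sketched as follows. The names and exact ladder logic are assumptions for illustration, not PassMark's implementation:

```python
def thread_ladder(max_threads: int, start: int = 2) -> list[int]:
    """Thread counts to test: double from `start`, always ending at the
    maximum available (eg 2, 4, 8, 16 or 3, 6, 12). Assumed logic."""
    counts = []
    n = start
    while n < max_threads:
        counts.append(n)
        n *= 2
    counts.append(max_threads)
    return counts

def ram_budget_mb(threads: int) -> int:
    """RAM used per run: <= 16 threads 512 MB, <= 64 threads 1 GB,
    > 64 threads 2 GB (figures from the changelog)."""
    if threads <= 16:
        return 512
    return 1024 if threads <= 64 else 2048

print(thread_ladder(16))     # [2, 4, 8, 16]
print(thread_ladder(12, 3))  # [3, 6, 12]
print(ram_budget_mb(24))     # 1024
```

As the changelog notes, the highest score across the ladder is what gets reported.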

MySQL 8.0.32.0 (64-bit) 查看版本資訊

更新時間:2023-01-18
更新細節:

MySQL 8.0.31.0 (64-bit) 查看版本資訊

更新時間:2022-10-11
更新細節:

rekordbox 6.6.5 查看版本資訊

更新時間:2022-10-11
更新細節:

What's new in this version:

New:
- RMX EFFECTS are now available for Free and Core plans with mouse operation
- Note: RMX EFFECTS may not be available when MIDI/HID devices, such as controllers or CDJs, are connected
- Added an option to hide playlists in Cloud Library Sync
- Added [Batch Auto Upload setting] for playlists in Cloud Library Sync
- Ability to change trigger timing to add a track to [Histories] in PERFORMANCE mode

Improved:
- Audio processing to enable the EQ adjust of HELIX/VINYL BRAKE in BEAT FX
- Ability to delete [Histories] by folder

Fixed:
- Occasionally unable to log in to Beatport/Beatsource
- Search filter in Beatport/Beatsource Offline Locker worked incorrectly
- On Mac, occasionally the sound was muted for a moment when playing certain tracks
- On Mac, audio may not be output when turning on Beat FX FILTER
- Fixture information on the FIXTURE LIBRARY screen may not be displayed correctly in LIGHTING mode
- Occasionally unable to copy Venue on the FIXTURE LIBRARY screen in LIGHTING mode
- Tracks would disappear from Search results on a CDJ/XDJ when loaded from the Search results while using PRO DJ LINK
- Occasionally effects such as TRANS and ROLL were off beat
- Occasionally it would take time to start sync when using SYNC MANAGER
- The filter function in the attribute column worked incorrectly
- Occasionally audio device changed to external USB audio unexpectedly
- Improved stability and fixes for other minor issues

vMix 25.0.0.34 查看版本資訊

更新時間:2022-07-28
更新細節:

What's new in this version:

vMix 25.0.0.34
- Fixed audio static issue when using Resample audio drop handling with Dante inputs


vMix 25.0.0.33
- Fix for crash when exporting Replay MP4 when audio source is set to a camera with no audio
- Incomplete replay events now automatically filtered out when exporting multiple events


vMix 25.0.0.32
Fixed:
- Fix for error when closing vMix with the latest v13 of Waves VST3 plugins
- vMix now prompts to save the existing preset when opening another preset
- Fix for using quotation marks in titles for YouTube Live Stream Now
- Fix for frame delay showing by mistake for VLC inputs as it is not supported


vMix 25.0.0.31
- New Trigger Manager in hamburger menu to easily see all triggers in a preset
- Fixed replay audio issue with certain capture cards, particularly Magewell Pro Capture
- vMix will now prompt to stop all outputs when opening a preset instead of just showing an error
- Virtual input will no longer save a duplicate file in a bundle
- Fixed automatic audio mixing issues when switching between Replay A and B when both are set to run separately


vMix 25.0.0.29
Fixed:
- issue selecting replay camera angles when using exactly 8 cameras
- issue with Virtual Sets not saving in bundles
- missing LiveLAN tab in Web Controller Titles page
- Web Controller Switcher now supports R for replay, and N for NDI inputs
- It will also show full input titles when hovering the mouse over each button
- issue with replay controller window when opening a preset


vMix 25.0.0.27
- Fixed issue where turning on or off tabs like the replay tab may cause other tabbed windows to change position
- Vimeo and Restream have recently discontinued support for logging in via an in-app popup window, due to it using Internet Explorer
- These have now been changed to use the computer's default browser instead
- Twitch login also changed to use computer's default browser, as we expect login via a popup window to no longer work there shortly as well


vMix 25.0.0.24
- Fixed an issue with detecting missing files when opening a preset

MySQL 8.0.30.0 (64-bit) 查看版本資訊

更新時間:2022-07-26
更新細節:

What's new in this version:

- Important Change: A previous change renamed character sets having deprecated names prefixed with utf8_ to use utf8mb3_ instead. In this release, we rename the utf8_ collations as well, using the utf8mb3_ prefix; this is to make the collation names consistent with those of the character sets, not to rely any longer on the deprecated collation names, and to clarify the distinction between utf8mb3 and utf8mb4. The names using the utf8mb3_ prefix are now used exclusively for these collations in the output of SHOW statements such as SHOW CREATE TABLE, as well as in the values displayed in the columns of Information Schema tables including the COLLATIONS and COLUMNS tables.
- Important Change: When more than one language had the same collation definition, MySQL implemented collations for only one of the languages. This meant that some languages were covered only by utf8mb4 Unicode 9.0 collations that are specific to other languages. This release fixes such issues by adding language-specific collations for those languages that were previously covered only by language-specific collations for other languages.

Compilation Notes:
- On Enterprise Linux, fixed ADD_LINUX_RPM_FLAGS so that the initial values of CMAKE_C_FLAGS and CMAKE_CXX_FLAGS are used before modifying them.

- Added a new SHOW_SUPPRESSED_COMPILER_WARNINGS CMake option. Enable it to show suppressed compiler warnings, and do so without failing with -Werror. It defaults to OFF.
- On Windows, deprecation warnings (C4996) were globally disabled with the /wd4996 command-line option; now deprecation warnings are disabled at the localized level where appropriate.
- Improved GCC 8 support to include -lstdc++fs in order to use std::filesystem.

Deprecation and Removal Notes:
- Replication: Setting the replica_parallel_workers system variable (or the equivalent server option --replica-parallel-workers) to 0 is now deprecated, and doing so now raises a warning.
- To achieve the same result (that is, use single threading) without the warning, set replica_parallel_workers=1 instead
- The --skip-host-cache server option is now deprecated, and subject to removal in a future release
- Use a statement such as SET GLOBAL host_cache_size = 0, or set host_cache_size in the my.cnf file, instead
- The --old-style-user-limits option causes the server to enforce user limits as they were prior to MySQL 5.0.3, and is intended for backwards compatibility with very old releases. This option is now deprecated, and using it now raises a warning. You should expect this option to be removed in a future release of MySQL, and so you are advised to begin now to remove any dependency your MySQL applications might have on this option.

Generated Invisible Primary Keys (GIPKs):
- MySQL 8.0.30 now supports GIPK mode, which causes a generated invisible primary key (GIPK) to be added to any InnoDB table that is created without an explicit primary key. This enhancement applies to InnoDB tables only.

The definition of the generated key column added to an InnoDB table by GIPK mode is shown here:
- my_row_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT INVISIBLE PRIMARY KEY
- The name of the generated primary key is always my_row_id; you cannot, while GIPK mode is in effect, use this as a column name in a CREATE TABLE statement that creates a new InnoDB table unless it includes an explicit primary key.
- GIPKs are not enabled by default. To enable them, set the sql_generate_invisible_primary_key server system variable (also introduced in this release) to ON. This setting has no effect on replication applier threads; this means that a replica never generates a primary key for a replicated table that was not created on the source with a primary key.
- You cannot alter a generated invisible primary key while GIPKs are in effect, with one exception: You can toggle the visibility of the GIPK using ALTER TABLE tbl CHANGE COLUMN my_row_id SET VISIBLE and ALTER TABLE tbl CHANGE COLUMN my_row_id SET INVISIBLE.
- By default, generated invisible primary keys can be seen in the output of SHOW CREATE TABLE and SHOW INDEX; they are also visible in MySQL Information Schema tables such as the COLUMNS and STATISTICS tables. You can make them hidden instead by setting show_gipk_in_create_table_and_information_schema to OFF.
- You can exclude generated invisible primary keys from the output of mysqldump using the --skip-generated-invisible-primary-key option added in this release. mysqlpump also now supports a --skip-generated-invisible-primary-key option which excludes GIPKs from its output.
- For more information and examples, see Generated Invisible Primary Keys. For general information on invisible column support in MySQL, see Invisible Columns.
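The behavior described above can be sketched in a short session; this is an illustrative example only, and the auction table and its column are hypothetical names, not from the release notes:

```sql
-- Enable GIPK mode for the current session.
SET sql_generate_invisible_primary_key = ON;

-- No explicit primary key, so InnoDB adds:
--   my_row_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT INVISIBLE PRIMARY KEY
CREATE TABLE auction (bid DECIMAL(5,2)) ENGINE=InnoDB;

-- The generated key appears in metadata by default.
SHOW CREATE TABLE auction;

-- The one permitted alteration: toggling the GIPK's visibility.
ALTER TABLE auction ALTER COLUMN my_row_id SET VISIBLE;
ALTER TABLE auction ALTER COLUMN my_row_id SET INVISIBLE;
```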

Keyring Notes:
- The keyring_aws plugin has been updated to use the latest AWS Encryption SDK for C (version 1.9.186)
- The keyring_aws_region variable supports the additional AWS regions supported by the new SDK. Refer to the variable description for a list of supported AWS regions.

Pluggable Authentication:
- The SASL LDAP plugin failed to properly parse Kerberos Key Distribution Center (KDC) host information read from the Kerberos configuration file, resulting in SASL authentication errors.

Security Notes:
- It is now possible to compile the MySQL server package (mysqld + libmysql + client tools) using OpenSSL 3.0 on supported platforms; this should not change the behavior of the server or client programs.

Spatial Data Support:
- Previously, the ST_TRANSFORM() function added in MySQL 8.0.13 did not support Cartesian Spatial Reference Systems. Beginning with this release, support is provided by this function for the Popular Visualisation Pseudo Mercator (EPSG 1024) projection method, used for WGS 84 Pseudo-Mercator (SRID 3857).

SQL Syntax Notes:
It is now possible to determine whether a REVOKE statement which cannot be executed raises an error or a warning. This is implemented with the addition of two new statement options, listed here with brief descriptions:
- IF EXISTS causes REVOKE to raise a warning rather than an error when the target user or role does not exist.
- IGNORE UNKNOWN USER causes REVOKE to raise a warning instead of an error if the target user or role is not known, but the statement would otherwise succeed.
- For a single target user or role and a given privilege or role to be removed, using the IF EXISTS and IGNORE UNKNOWN USER options together in the same REVOKE statement means that the statement succeeds (albeit doing nothing, and with a warning), even if both the target user or role and the privilege or role to be removed are unknown, as long as the statement is otherwise valid. In the case of multiple targets, multiple privileges or roles to be removed, or both, the statement succeeds, performing those removals which are valid, and issuing warnings for those which are not.
- For more information, see REVOKE Statement.
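A minimal sketch of the two options used together (the account and schema names are illustrative):

```sql
-- Succeeds with a warning even if 'bob'@'localhost' is unknown and/or was
-- never granted SELECT on world.*; without these options, the same
-- statement would raise an error.
REVOKE IF EXISTS SELECT ON world.* FROM 'bob'@'localhost' IGNORE UNKNOWN USER;
SHOW WARNINGS;
```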

XA Transaction Notes:
- Replication: Previously, recovery was not guaranteed when a server node in a replication topology unexpectedly halted while executing XA PREPARE, XA COMMIT, or XA ROLLBACK. To address this problem, MySQL now maintains consistent XA transaction state across a topology using either MySQL “classic” Replication or MySQL Group Replication when a server node is lost from the topology and then regained. This also means that XA transaction state is now propagated so that nodes do not diverge while doing work within a given transaction in the event that a server node halts, recovers, and rejoins the topology.
- For any multi-server replication topology (including one using Group Replication), the XA transaction state propagates consistently, so that all servers remain in the same state at all times. For any such topology of any size (including a single server, as long as binary logging is enabled), it is now possible to recover any server to a consistent state after it has halted unexpectedly and been made to rejoin the topology after dropping out.
- This enhancement is implemented for the case of a single server by adding support for a two-phase XA prepare between the storage engine and the server's internal transaction coordinator (ITC), with the state of the prepare retained by both. This means that the ITC can purge its internal logs safely, without the risk of losing state, should the server halt following the purge. In the single-node case, imposing order of execution between the storage engine and the binary log prevents externalization of GTIDs before the corresponding changes become visible to the storage engine; in a topology comprising multiple servers, this keeps the transaction state from being broadcast to the topology before it is guaranteed to be locally consistent and persistent. In all cases, the state of the XA transaction is extracted from the last binary log file to be written and synchronized with the transaction state obtained from the storage engine.
- A known issue in this release can be encountered when the same transaction XID has been used to execute XA transactions sequentially. If a disruption in operation occurs while the server is processing XA COMMIT ... ONE PHASE using this same XID, after the transaction has been prepared in the storage engine, the state between the binary log and the storage engine can no longer be reliably synchronized.
- For more information, see XA Transactions.

Functionality Added or Changed:
- Important Change: Binary packages that include curl rather than linking to the system curl library have been upgraded to use curl 7.83.1.
- Important Change: For platforms on which OpenSSL libraries are bundled, the linked OpenSSL library for MySQL Server has been updated to version 1.1.1o.
- Important Change: The fido2 library included with MySQL, used with the authentication_fido plugin, has been upgraded to version 1.8.0. (Previously, version 1.5.0 was included with MySQL.)
- For more information, see FIDO Pluggable Authentication.
- InnoDB: The innodb_doublewrite system variable, which enables or disables the doublewrite buffer, has two new settings, DETECT_ONLY and DETECT_AND_RECOVER. With the DETECT_ONLY setting, database page content is not written to the doublewrite buffer, and recovery does not use the doublewrite buffer to fix incomplete page writes. This lightweight setting is intended for detecting incomplete page writes only. The DETECT_AND_RECOVER setting is equivalent to the existing ON setting. For more information, see Doublewrite Buffer.
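A brief sketch of selecting the new setting; per the description above, DETECT_ONLY skips recovery of incomplete page writes, so it trades safety for lower overhead:

```sql
-- Switch the doublewrite buffer to lightweight detection-only mode.
-- (Assumes the instance permits a runtime change between non-OFF settings;
-- otherwise set innodb_doublewrite=DETECT_ONLY in the option file.)
SET GLOBAL innodb_doublewrite = 'DETECT_ONLY';
SELECT @@GLOBAL.innodb_doublewrite;
```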
- Thanks to Facebook for the contribution.
- InnoDB: InnoDB now supports dynamic configuration of redo log capacity. The innodb_redo_log_capacity system variable can be set at runtime to increase or decrease the total amount of disk space occupied by redo log files.
- With this change, the number of redo log files and their default location has also changed. From MySQL 8.0.30, InnoDB maintains 32 redo log files in an #innodb_redo directory in the data directory. Previously, InnoDB created two redo log files in the data directory by default, and the number and size of redo log files were controlled by the innodb_log_files_in_group and innodb_log_file_size variables. These two variables are now deprecated.
- When an innodb_redo_log_capacity setting is defined, innodb_log_files_in_group and innodb_log_file_size settings are ignored; otherwise, those settings are used to compute the innodb_redo_log_capacity setting (innodb_log_files_in_group * innodb_log_file_size = innodb_redo_log_capacity). If none of those variables are set, redo log capacity is set to the innodb_redo_log_capacity default value, which is 104857600 bytes (100MB).
- Several status variables are provided for monitoring the redo log and redo log capacity resize operations.
- As is generally required for any upgrade, this change requires a clean shutdown before upgrading.
- For more information about this feature, see Redo Log.
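For example, capacity can be resized at runtime and the operation observed through status variables (the 8 GiB value is illustrative):

```sql
-- Raise total redo log capacity to 8 GiB.
SET GLOBAL innodb_redo_log_capacity = 8589934592;

-- Check the configured capacity and the redo log status variables.
SELECT @@GLOBAL.innodb_redo_log_capacity;
SHOW GLOBAL STATUS LIKE 'Innodb_redo_log%';
```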
- Added Ubuntu 22.04 support.
- The order of the columns in the primary key definition for a few tables in the mysql schema has been changed, so that the columns containing the host name and user name are together in sequence at the start of the primary key. ACL queries on these tables are performed using only the host name and user name, and if those columns are not together in sequence, a full table scan must be performed to identify the relevant record. Placing the host name and user name together means that index lookup can be used, which improves performance for CREATE USER, DROP USER, and RENAME USER statements, and for ACL checks for multiple users with multiple privileges.
- The changed tables are mysql.db, mysql.tables_priv, mysql.columns_priv and mysql.procs_priv. When you upgrade to MySQL 8.0.30 or later, these tables are modified in the second step of the MySQL upgrade process. Use the --upgrade=FORCE option when performing logical upgrades using a backup or export utility such as mysqldump or mysqlpump, which ensures that the table structures are checked and rebuilt with the new column order.
- The myisam_repair_threads system variable and myisamchk --parallel-recover option were removed.
- A new mysqldump option --mysqld-long-query-time lets you set a custom value of the long_query_time system variable for mysqldump’s session. Use the new option to increase the elapsed time allowed for mysqldump’s queries before they are written to the slow query log file, in order to avoid unnecessary logging. Thanks to Facebook for the contribution.
- Error log components can now be loaded implicitly at startup before the InnoDB storage engine is available. This new method of loading error log components loads and enables the components defined by the log_error_services variable.
- Previously, error log components had to be installed first using INSTALL COMPONENT and were only loaded after InnoDB was fully available, as the list of components to load was read from the mysql.components table, which is an InnoDB table.

Implicit load of error log components has these advantages:
- Log components are loaded early in the startup sequence, making logged information available sooner.
- It helps avoid loss of buffered log information should a failure occur during startup.
- Loading log components using INSTALL COMPONENT is not required, simplifying error log configuration.
- For more information about this feature, see Error Log Configuration.
- If you have previously installed loadable log components using INSTALL COMPONENT and you list those components in a log_error_services setting that is read at startup (from an option file, for example), your configuration should be updated to avoid startup warnings. For more information, see Error Log Configuration Methods.
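For instance, a loadable sink listed in a persisted log_error_services value is now loaded implicitly at startup, with no INSTALL COMPONENT step; the JSON sink below is an illustrative choice:

```sql
-- Persist a setting that names a loadable sink; at the next startup the
-- server loads log_sink_json implicitly, before InnoDB is available.
SET PERSIST log_error_services = 'log_filter_internal; log_sink_internal; log_sink_json';
```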
- MySQL Enterprise Audit’s audit log file can now be extended with optional data fields to show the query time, the number of bytes sent and received, the number of rows returned to the client, and the number of rows examined. This data is available in the slow query log for qualifying queries, and in the context of the audit log it similarly helps to detect outliers for activity analysis. It is delivered to the audit log through new component services that you set up as an audit log filtering function. The extended data fields can only be added when the audit log is in JSON format (audit_log_format=JSON), which is not the default setting.
- MySQL Server’s AES_ENCRYPT() and AES_DECRYPT() functions now support the use of a key derivation function (KDF) to create a cryptographically strong secret key from information such as a password or a passphrase that you pass to the function. The derived key is used to encrypt and decrypt the data, and it remains in the MySQL Server instance and is not accessible to users. Using a KDF is highly recommended, as it provides better security than specifying your own premade key or deriving it by a simpler method when you use the function. The functions support HKDF (available from OpenSSL 1.1.0), for which you can specify an optional salt and context-specific information to include in the keying material, and PBKDF2 (available from OpenSSL 1.0.2), for which you can specify an optional salt and set the number of iterations used to produce the key.
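A hedged sketch of the extended argument list; the key string, salt, info, and iteration values are illustrative:

```sql
SET block_encryption_mode = 'aes-256-cbc';
SET @iv = RANDOM_BYTES(16);

-- HKDF: optional salt plus context-specific info in the keying material.
SET @ct = AES_ENCRYPT('secret', 'my_passphrase', @iv, 'hkdf', 'salt', 'info');
SELECT CAST(AES_DECRYPT(@ct, 'my_passphrase', @iv, 'hkdf', 'salt', 'info') AS CHAR);

-- PBKDF2: optional salt plus an iteration count.
SET @ct2 = AES_ENCRYPT('secret', 'my_passphrase', @iv, 'pbkdf2_hmac', 'salt', 2000);
```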
- A new system status variable Tls_library_version shows the runtime version of the OpenSSL library that is in use for the MySQL instance. The version of OpenSSL affects features such as support for TLSv1.3.
- From MySQL 8.0.30, MySQL Enterprise Encryption’s functions are provided by a component, rather than being installed from the openssl_udf shared library. The new functions provided by the component use only the generally preferred RSA algorithm, not the DSA algorithm or the Diffie-Hellman key exchange method, and they follow current best practice on minimum key size. The component functions also add support for SHA3 for digests (provided that OpenSSL 1.1.1 is in use), and do not require digests for signatures, although they support them.
- If you upgrade to MySQL 8.0.30 from an earlier release where the functions are installed manually from the openssl_udf shared library file, the functions you created remain available and are supported. However, these legacy functions are deprecated from this release, and it is recommended that you install the component instead. The component functions are backward compatible, so RSA public and private keys, encrypted data, and signatures that were produced by the legacy functions can be used with the component functions. For the component functions to support decryption and verification for content produced by the legacy functions, you must set the new system variable enterprise_encryption.rsa_support_legacy_padding to ON (the default is OFF).
- The component functions generate public and private RSA keys in PKCS #8 format. They allow a minimum key size of 2048 bits, which is a suitable minimum RSA key length for current best practice. You can set a maximum key size up to 16384 bits using the system variable enterprise_encryption.maximum_rsa_key_size, which defaults to a maximum key size of 4096 bits.
- Connections whose users have the CONNECTION_ADMIN privilege are not terminated when MySQL Server is set to offline mode, which is done by changing the value of the offline_mode system variable to ON. Previously, checking for connections that had the CONNECTION_ADMIN privilege could cause a race condition because it involved accessing other threads. Now, a flag for each thread caches whether or not the user for the thread has the CONNECTION_ADMIN privilege. The flag is updated if the user privilege changes. When offline mode is activated for the server, this flag is checked for each thread, rather than the security context of another thread. This change makes the operation threadsafe.
- In addition, when offline mode is activated, connections whose users have the SYSTEM_USER privilege are now only terminated if the user that runs the operation also has the SYSTEM_USER privilege. Users that only have the SYSTEM_VARIABLES_ADMIN privilege, and do not have the SYSTEM_USER privilege, can set the offline_mode system variable to ON to activate offline mode. However, when they run the operation, any sessions whose users have the SYSTEM_USER privilege remain connected, in addition to any sessions whose users have the CONNECTION_ADMIN privilege. This only applies to existing connections at the time of the operation; users with the SYSTEM_USER privilege but without the CONNECTION_ADMIN privilege cannot make new connections to a system in offline mode.
- The Performance Schema now provides instrumentation for monitoring Group Replication memory usage.
- See Monitoring Group Replication Memory Usage with Performance Schema Memory Instrumentation.

Fixed:
- InnoDB: A TRUNCATE TABLE operation failed to remove data dictionary entries for columns that were dropped using ALGORITHM=INSTANT.
- Thanks to Marcelo Altmann for the contribution.
- InnoDB: An incorrect nullable column calculation on tables with instantly added columns caused data to be interpreted incorrectly.
- InnoDB: After upgrading to MySQL 8.0.29, a failure occurred when attempting to access a table with an instantly added column.
- InnoDB: Only the physical position of instantly added columns was logged, which was not sufficient for index recovery. The logical position of columns was also required.
- InnoDB: The field_phy_pos debug variable in the InnoDB sources was not updated for child tables during a cascading update operation.
- InnoDB: Some instances of the rec_get_instant_row_version_old() function in the InnoDB sources did not check for row versioning.
- InnoDB: The read_2_bytes() function in the InnoDB sources, which reads bytes from the log buffer, returned a null pointer.
- InnoDB: In the Performance Schema instrumentation for InnoDB read-write locks, lock acquisition failures and successes for TRY (no-wait) operations were instrumented incorrectly.
- InnoDB: In a specific locking scenario, an implicit lock was not converted to an explicit lock as expected, triggering a lock_rec_has_expl(LOCK_X | LOCK_REC_NOT_GAP, block, heap_no, trx) debug assertion failure.
- InnoDB: A check that determines if a table has instantly added columns was performed for each column, which affected the performance of ADD and DROP COLUMN operations on tables with numerous columns. The check is now performed once per table.
- InnoDB: A workload that generated a large number of lock requests and numerous timeouts caused a long semaphore wait failure. To address this issue, optimizations were implemented to reduce the number of exclusive global lock system latches.
- InnoDB: The m_flush_bit in the redo log block header, which was set for the first block of multiple blocks written in a single log write call, provided no benefit and has been removed.
- InnoDB: Functions used by MySQL Enterprise Backup to inform InnoDB that it has started reading redo logs and to advance the cursor to a larger log sequence number (LSN) now require the BACKUP_ADMIN privilege.
- InnoDB: Fixed clang-tidy and cppcheck warnings, which included the removal of unused code and unnecessary checks.
- InnoDB: Recovery of a redo log file mini-transaction (mtr) caused a debug assertion failure on a MySQL Server instance with a small innodb_log_buffer_size setting.
- Thanks to Mengchu Shi for the contribution.
- InnoDB: Compiling with the WITH_VALGRIND source configuration option produced Wunused-variable warnings.
- InnoDB: Multiple issues with the lock-free hash table (ut_lock_free_hash_t) were addressed.
- InnoDB: A query on a generated column with a secondary index caused a failure. The field number representing the position of the generated column was not valid.
- InnoDB: Memory consumption was greater than expected when updating and inserting rows with multi-valued index columns. The memory allocated for multi-valued columns for each row update was held until the file handle was released.
- InnoDB: The UT_LOCATION_HERE structure in the InnoDB sources was not used consistently.
- InnoDB: A table object needed to retrieve an array of values from a multi-valued index column when computing the value of a generated column was unavailable.
- InnoDB: A 4GB tablespace file size limit on Windows 32-bit systems has been removed. The limit was due to an incorrect calculation performed while extending the tablespace.
- InnoDB: Hash and random generator functions in the InnoDB sources were improved.
- InnoDB: A DROP TABLE operation on a table with a discarded tablespace caused an unnecessary assertion failure
- InnoDB: A query on a table with a JSON column returned only a partial result set after adding a multi-valued index
- InnoDB: Purging a record with multiple binary large object values raised an insertion failure due to a mini-transaction (mtr) conflict
- InnoDB: Enabling the adaptive hash index (AHI) on a high-concurrency instance caused temporary AHI search latch contention while the hash index was being built.
- Thanks to Zhou Xinjing from CDB Team at Tencent for the patch
- Packaging: The SASL LDAP client-side plugin was missing from the MySQL Community packages for Windows.
- Replication: When a table definition diverged between the source and the replica because the replica had an extra primary key, updates and deletes on the replica would fail if that table had an index that was present both on the source and the replica. Primary keys for an InnoDB table are automatically included in all indexes, and the replication applier needs values for all parts of the key to be included in an event in order to search the index. Previously, the applier checked that all the user-defined key parts were present, but the check did not cover hidden primary keys that were automatically included. The applier now validates that both user-defined and automatically included key parts are present in an event before using the index to search the data.
- Replication: The write sets extracted by MySQL Replication from transactions when the transaction_write_set_extraction system variable is enabled (which is the default) are extracted from primary keys, unique keys, and foreign keys. They are used to detect dependencies and conflicts between transactions. Previously, write sets involving multi-column foreign keys incorrectly identified each column as a separate foreign key. The issue has now been fixed, and foreign key write sets include all referenced key columns.
- Replication: When row-based replication was in use, a replica could sometimes override the SQL mode value that was sent by the source, in an attempt to avoid issues with additional columns on the replica. In extreme cases this could lead to data divergence. The problem has been corrected so that the replica now preserves the source’s SQL mode wherever possible.
- Replication: The COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE column in the Performance Schema table replication_group_member_stats could persistently show transactions related to view change events (View_change_log_event) that had already been applied. These events are queued in the Group Replication applier channel but applied in the Group Replication recovery channel, causing a race condition that could result in the counter decrement being lost. The increment of the count now takes place at a more suitable point, and the counter for COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE is also now set to zero when the applier is not busy.
- Replication: The message that is recorded when a member tries to rejoin a Group Replication topology while an old incarnation of the same server is still present has been upgraded from an informational note to a warning message.
- Replication: MySQL’s semisynchronous replication did not respect the value of the net_read_timeout system variable and forced a read timeout of one millisecond. This could result in the function experiencing partial reads of acknowledgment messages and packets arriving out of order, while other connections in the MySQL system were functioning correctly. The value of the net_read_timeout system variable is now applied to connections for semisynchronous replication
- Replication: When the --replicate-same-server-id option was used to make the replica not skip events that have its own server ID, if the log file was rotated, replication stopped with an error. The log rotation event now checks and applies the current value of the option

API: Applications that previously used the MySQL client library to perform an automatic reconnection to the server received the following mysql_query error after the server was upgraded:
- [4031] The client was disconnected by the server because of inactivity. See wait_timeout and interactive_timeout for configuring this behavior
- Pushing a condition down to a derived table was not handled correctly in all cases.
- After a condition was pushed down to a derived table containing a set operation, folding an always-true boolean condition produced an incorrect rewrite, because abort_on_null was not set to true for the cloned condition when it was copied during pushdown.
- A missing error return when processing an invalid ORDER BY expression in a view definition led to an assert in debug builds.
- MySQL Server would not compile with the latest version of Visual Studio 2022.
- While attempting to clone a system variable during condition pushdown, the server sometimes could not determine the correct context of the cloned expression.
- To prevent this, we disallow condition pushdown to derived tables when they use system variables, or if the underlying expressions in the derived table contain system variables.
- Added Enterprise Linux 9 (EL9) support.
- On macOS 11, MySQL Server did not have the correct entitlement to generate a core dump in the event of an unexpected server halt. A build option WITH_DEVELOPER_ENTITLEMENTS has been added to allow a build to generate core dumps.
- Improved error handling for '-DWITH_LIBEVENT=system' and '-DWITH_EDITLINE=system' on systems missing libevent-devel or libedit-devel.
- A fix in MySQL 8.0.29 addressed the situation where, if a MySQL instance stopped unexpectedly or was restarted shortly after a SET PERSIST statement was used to record system variable settings, the configuration file mysqld-auto.cnf could be left empty, in which case the server restart could not proceed. The persisted system variables are now written to a backup file, which is renamed to mysqld-auto.cnf only after the success of the write has been verified, leaving the original mysqld-auto.cnf file still available. On a restart, if a backup file with valid contents is found, the server reads from that file; otherwise the mysqld-auto.cnf file is used and the backup file is deleted. The earlier fix did not flush the file to disk, so it was still possible for the issue to occur; this patch adds those operations.
- Fixed the -DENABLE_GCOV CMake option.
- The SENSITIVE_VARIABLES_OBSERVER privilege, introduced in MySQL 8.0.29, is now granted to users with the SYSTEM_VARIABLES_ADMIN privilege during upgrade. Previously, the privilege was not granted to any database user during upgrade.
- A select from a view that used left joins did not return any results.
- Under certain circumstances TRUNCATE performance_schema.accounts caused duplicated counts in global_status. This occurred if some hosts were not instrumented, for example, if performance_schema_hosts_size was set to a low value. Our thanks to Yuxiang Jiang and the Tencent team for the contribution.
- It was possible under certain conditions for EXPLAIN ANALYZE to attempt access of an iterator that did not exist.
- References: This issue is a regression of:
- Support was added for compiling the keyring_oci plugin with OpenSSL 3.
- Events recorded in the Performance Schema tables for thread creation and deletion were retained until server shutdown, instead of being removed when the client connection ended. Thread creation and deletion now takes place after the Performance Schema instrumentation is created for the user session, so it is cleaned up when the session ends.
- Upgraded the bundled zlib library to zlib 1.2.12. Also made zlib 1.2.12 the minimum zlib version supported, and removed WITH_ZLIB from the WITH_SYSTEM_LIBS CMake option.
- The CONNECTION_ID() function, since it returns a session ID which remains constant for the lifetime of the session, was treated as a constant function. This caused issues when CONNECTION_ID() was used inside a trigger attached to a table which might be reused by other sessions. We fix this by making the function const for execution, and returning the actual session ID when the function is evaluated.
- Executed codespell on the majority of source code, and fixed the reported spelling errors in the code comments.
- The MySQL Enterprise Encryption openssl_udf function library plugin was reimplemented to use OpenSSL 3 APIs.
- FEDERATED storage engine code was revised to address NULL pointer and variable access issues.
- Histograms in MySQL returned a selectivity estimate of 0 for values that fall outside histogram buckets. A value might be missing from the histogram because it was missed during sampling, or because the histogram had grown stale. To prevent this, we introduce a constant lower bound of 0.001 on the selectivity estimates produced by histograms; this lower bound corresponds to the selectivity of a value or range that is likely to be missed during sampling.
- Using a constant lower bound rather than a statistical estimate for the selectivity of a missing value has the advantage of simplicity and predictability, and provides some protection against underestimating the selectivity due to stale histograms and within-bucket heuristics.
- For more information about histograms in MySQL, see Optimizer Statistics.

For certain queries using a common table expression (CTE), EXPLAIN ANALYZE did not provide any profiling data for the CTE even when the CTE was known to be executed. This happened when the following conditions were met:
- The CTE was referenced more than once in the query plan.
- The first reference to the CTE (in the order of the output of EXPLAIN FORMAT=TREE) was never executed.
- At least one of the subsequent references was executed at least once.
- The problem was that the CTE plan was always printed when encountering the first reference to the CTE; if that reference was never executed, the CTE was not materialized there; and thus there was no profiling data to print.
- The fix for this issue ensures that we print the CTE plan when it is first executed, that is, the point at which it is materialized. The output then includes profiling data. If the CTE is never executed, we print the plan at the last reference, when there is no profiling data.
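A hypothetical query shape meeting the three conditions above (table t1 and its column are illustrative):

```sql
EXPLAIN ANALYZE
WITH cte AS (SELECT id FROM t1)
SELECT * FROM cte WHERE FALSE   -- first reference: never executed
UNION ALL
SELECT * FROM cte;              -- second reference: executes and materializes
```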
- The output from the command mysqld --verbose --help previously showed plugin load options as ON even when they were off by default, or turned off using an option. The output now shows the current value for the plugin.
- The Server now bundles curl (7.83.1) and only uses it when alternative SSL systems are used, such as openssl11 on EL7.
- Debug MySQL binaries can now be built using -Og and -fno-inline.
- The FIREWALL_EXEMPT privilege, introduced in MySQL 8.0.27, is now granted to users with the SYSTEM_USER privilege during upgrade. Previously, the privilege was not granted to any database user during upgrade.
- A correlated subquery did not use a functional index as expected. This occurred when an outer column reference used inside the subquery was not considered as constant for subquery execution, which allowed consideration of the functional index to be skipped.
- We fix this problem by making sure to consider the outer column reference as constant while executing the subquery.
- Added alternate OpenSSL system package support by passing in openssl11 on EL7 or openssl3 on EL8 to the WITH_SSL CMake option. Authentication plugins, such as LDAP and Kerberos, are disabled as they do not support these alternative versions of OpenSSL.
- Prepared statements with subqueries that accessed no tables, but where the subquery evaluation raised an error, triggered an assert failure in debug builds.
- Some stored functions were not executed correctly following the first invocation.
- When performing a query using a recursive common table expression (CTE), where a query expression is removed after constant predicate elimination, it should be possible to recreate the CTE temporary table once the reference count of its table objects reaches zero; in certain cases, however, one of the table references was not properly recorded as attached to the CTE.
- References: See also:
- Added a missing error return to the parser.
- A number of issues with pushdown of conditions making use of outer references, relating to work done in MySQL 8.0.22 to implement condition pushdown for materialized derived tables, have been identified and resolved.
- The plan generated for a SELECT using a common table expression involves table materialization and an index scan on the materialized table. Because the TempTable engine does not yet support all index scan methods, such queries might not always execute correctly.
- With other MySQL engines, the materialization access path has special handling when the access path is not considered basic; for TempTable, an index scan was not considered basic, which led to undefined behavior.
- We fix this issue by considering the index scan access path basic, thus avoiding the use of any index scan access methods on TempTable tables.
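A minimal sketch (table and column names hypothetical) of the affected query shape: a CTE that is materialized and then probed via an index on the materialized temporary table, for example by a self-join:

```sql
WITH cte AS (SELECT id, val FROM t1)
SELECT a.id, b.val
FROM cte AS a
JOIN cte AS b
  ON a.id = b.id;  -- the join may probe an index built on the materialized table
```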
- The Data_free column in the INFORMATION_SCHEMA.FILES table was not updated after adding a new data file to the InnoDB system tablespace.
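For reference, the free-space figure in question can be inspected with a query along these lines (a sketch; the tablespace name filter assumes the default InnoDB system tablespace naming):

```sql
-- After adding a data file to the system tablespace,
-- DATA_FREE now reflects the added space:
SELECT FILE_NAME, DATA_FREE
FROM INFORMATION_SCHEMA.FILES
WHERE TABLESPACE_NAME = 'innodb_system';
```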
- If a plugin attempted to register a system variable with a name that duplicated that of an existing system variable, the existing static system variable might be overwritten, and uninstalling the plugin might leave pointers to the freed memory. These issues have now been fixed.
- SHOW TABLES and SELECT * FROM INFORMATION_SCHEMA.TABLES did not return any results from the Performance Schema if the user had access privileges only on individual Performance Schema tables.
- Calling a function relating to the data_masking plugin without first installing the plugin led to an unplanned server shutdown. Functions relating to this plugin are initialized by calling init functions which in turn access the UDF metadata service, but this is valid only when the data masking plugin is installed. We fix this problem by adding a check to verify that the plugin is installed before initializing such functions, and by returning an appropriate error message if the plugin providing them is not installed.
- Under certain conditions, the server did not handle the expiration of max_execution_time or the execution of a KILL statement correctly.
- mysqlslap, which uses multiple threads to connect to the server, could not run with a user account that used FIDO authentication. The issue has been fixed by an update to the FIDO library allowing the authentication to be performed on multiple threads.
- A deadlock could occur in Group Replication when a member was interacting with the service infrastructure, such as a joining member checking for an incompatible configuration and then leaving the group as a result. The issue has now been fixed.
- If an incorrect value was set for the binlog_checksum system variable during a session, a COM_BINLOG_DUMP command made in the same session to request a binary log stream from a source failed. The server now validates the specified checksum value before starting the checksum algorithm setup process.
- For slow query logging, the Slow_queries status variable was not incremented unless the slow query log was enabled, contrary to the documentation.

A prepared statement could accept an empty string as a valid float value, a regression from MySQL 8.0.27 behavior. This fix explicitly checks that an interpreted string is non-empty and is fully interpreted as a (float) number. In addition, new verification now ensures that:
- Empty strings and strings consisting only of spaces are rejected for all numeric types.
- Regular numeric values are accepted, as are numeric values with leading and trailing spaces.
- Upgrading to MySQL 8.0.29 led to issues with existing spatial indexes (see Creating Spatial Indexes). The root cause of the problem was a change in how geographic area computations were performed by the included Boost library, which was upgraded to version 1.77.0 in MySQL 8.0.29. We fix this by ensuring that we accommodate the new method whenever such computations are performed
- When pushing a condition down to a derived table for prepared statements, we clone a condition, which also includes parameters when the derived table contains unions. When a statement needed to be reprepared during execution—for example, when the signedness of the value specified did not match that of the actual data type—the parameter was not cloned correctly, resulting in errors. This occurred because the value specified for the parameter was used to print the string for reparsing, instead of a literal ? placeholder character.
- Now in such cases we set the flag QT_NO_DATA_EXPANSION when printing parameters for reparsing which, when enabled, causes the ? placeholder to be printed rather than the actual value
- On macOS, improved the Boost library detection logic for Homebrew, as a potentially incompatible system Boost version could be used even with -DWITH_BOOST set
- On s390x RHEL 7.x, fetching the CPU cache line size returned 0, which caused rpl_commit_order_queue and integrals_lockfree_queue to fail.
- Our thanks to Namrata Bhave for the contribution
- When the mysql client was unable to reconnect to the server following an unexpected server halt, the process of building the completion hash allocated memory that was not freed. The reconnection operation now does not build the completion hash if the client fails to reconnect, and the memory concerned is freed if the client is disconnected
- Added a cycle timer for the s390x architecture.
- Our thanks to Namrata Bhave for the contribution
- In certain cases, incorrect results could be produced by execution of a semijoin with materialization when the WHERE clause of the subquery contained an equality. In some cases, such as when one side of such an equality was an IN or NOT IN subquery, the equality was neither pushed down to the materialized subquery nor evaluated as part of the semijoin. This also caused issues with some inner hash joins
- Comparator functions for queries of the form (<date column>, <non-date column>) IN ((val1, val2), (val3, val4), …) could return wrong results
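A sketch of the affected query shape (table and column names hypothetical; d is a DATE column, s a string column):

```sql
-- Mixed-type row comparisons like this could previously select the
-- wrong comparator for one of the row positions and return bad results:
SELECT *
FROM t1
WHERE (d, s) IN (('2023-01-01', 'a'),
                 ('2023-06-15', 'b'));
```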
- Fixed an assert definition in SetOsLimitMaxOpenFiles; our thanks to hongyuan li for the contribution
- Previously, it was assumed that, when the same non-nullable expression was used as both the first and second arguments to LIKE, the result was always true, and so could be optimized away. This assumption turns out not to be valid, due to the fact that LIKE treats the backslash (\) as an escape character, even when ESCAPE is not specified. This led to different results when the condition was used in the SELECT list as opposed to the WHERE clause. To fix the problem, we no longer perform this optimization with LIKE, with or without an ESCAPE clause
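Why the assumption fails can be seen with any string containing a backslash (a sketch, assuming the default sql_mode without NO_BACKSLASH_ESCAPES):

```sql
-- The string literal 'a\\b' is the three characters a\b. Used as a LIKE
-- pattern, the \b escape denotes a literal b, so the pattern matches the
-- two-character string 'ab', not 'a\b':
SELECT 'a\\b' LIKE 'a\\b';  -- 0, even though both arguments are identical
```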
- In some cases, when arguments other than global transaction IDs (such as column values) were passed to GTID_SUBSET(), the function returned values other than the expected NULL.
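An illustrative sketch (the GTID set shown is a made-up example value):

```sql
-- A first argument that is not a valid GTID set should yield NULL,
-- rather than a spurious 0 or 1:
SELECT GTID_SUBSET('not-a-gtid-set',
                   '3E11FA47-71CA-11E1-9E33-C80AA9429562:1-5');
```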
- A problem with evaluation of general quantified comparison predicates occurred when the left-hand side of the predicate was NULL. In such cases, the value of the subquery evaluation from the last current row is saved, so that it does not need re-evaluation, but the cached value (result_for_null_param) was not cleared between executions, so that the next execution could re-use the result from the previous execution. One consequence of this was that, when a subquery execution first caused zero rows to match from the subquery—which for an ALL predicate should return TRUE—a subsequent execution causing at least one row to match also returned TRUE, even though FALSE was expected.
- To solve this issue, we now make sure to clear result_for_null_param while cleaning up the subquery predicate following execution
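A sketch of the affected predicate shape (names hypothetical): for an outer row where t1.a is NULL and the correlated subquery returns no rows, the ALL predicate is TRUE; that cached result must not leak into the evaluation for the next outer row whose subquery does return rows.

```sql
-- Quantified comparison with a NULL left-hand side; the cached
-- result_for_null_param is now cleared between executions:
SELECT *
FROM t1
WHERE t1.a > ALL (SELECT t2.b FROM t2 WHERE t2.k = t1.k);
```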
- Test cases executed with the --async-client option and shutdown commands caused mysqltest to halt unexpectedly
- MySQL supports the use of equiheight histograms to improve selectivity estimates. Each bucket in an equiheight histogram for a column should contain roughly the same number of values (rows); keeping the buckets small helps minimize any error.
- When constructing an equiheight histogram, too many values were sometimes placed in the same bucket, which could result in substantial errors in selectivity estimation. We fix this by introducing a new equiheight construction algorithm that guarantees low error, and adapts to the distribution of the data to make efficient use of its buckets. In addition, a new estimator for the number of distinct values in histogram buckets provides improved worst-case error guarantees.
- See The INFORMATION_SCHEMA COLUMN_STATISTICS Table, and Optimizer Statistics, for more information
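For reference, an equiheight histogram is built and inspected along these lines (table, column, and bucket count are illustrative):

```sql
-- Build a 32-bucket histogram on column c1; the new construction
-- algorithm decides how values are distributed across buckets:
ANALYZE TABLE t1 UPDATE HISTOGRAM ON c1 WITH 32 BUCKETS;

-- The resulting histogram is stored as JSON in COLUMN_STATISTICS:
SELECT HISTOGRAM->>'$."histogram-type"' AS histogram_type
FROM INFORMATION_SCHEMA.COLUMN_STATISTICS
WHERE TABLE_NAME = 't1' AND COLUMN_NAME = 'c1';
```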
- Deprecation warnings returned to client programs were sent to stdout rather than stderr, which in the case of mysqldump could mean that the dump file no longer worked because the warnings were included in it. The issue has now been fixed and the warnings are sent to stderr
- Extended support for chained SSL certificates