Kaspersky Lab Products Remover Version History, Page 2

Latest version: Kaspersky Lab Products Remover 1.0.4000

Kaspersky Lab Products Remover version history list

Some errors may occur when removing Kaspersky Lab products through Start > Control Panel > Add/Remove Programs. As a result, the application may not uninstall correctly, or leftover components may remain on the system. To completely remove an installed Kaspersky Lab product, use the kavremover utility. Kaspersky Lab Products Remover makes it easy to remove Kaspersky Lab products from Windows! The removal utility can completely uninstall the following products: Kaspersky Small Office... About Kaspersky Lab Products Remover


NVIDIA CUDA Toolkit 12.0.1 (for Windows 11) — view release notes

Updated: 2023-03-01
Update details:

What's new in this version:

New meta-packages for Linux installation:
- cuda-toolkit
  - Installs all CUDA Toolkit packages required to develop CUDA applications
  - Handles upgrading to the latest version of CUDA when it’s released
  - Does not include the driver
- cuda-toolkit-12
  - Installs all CUDA Toolkit packages required to develop CUDA applications
  - Handles upgrading to the next 12.x version of CUDA when it’s released
  - Does not include the driver
- New CUDA API to enable mini core dump programmatically is now available

CUDA Compilers:
- NVCC has added support for the following host compilers: GCC 12.2, NVC++ 22.11, Clang 15.0, VS2022 17.4
- Breakpoint and single-stepping behavior for multi-line statements in device code has been improved when code is compiled with nvcc using a GCC or Clang host compiler, or when compiled with NVRTC on non-Windows platforms. The debugger now correctly breakpoints and single-steps on each source line of a multi-line statement.
- PTX exposes a new special register in the public ISA that can be used to query the total size of shared memory, including user shared memory and software-reserved shared memory.
- NVCC and NVRTC now show preprocessed source line and column info in diagnostics to help users understand the message and identify the issue causing the diagnostic. The source line and column info can be turned off with --brief-diagnostics=true.

CUDA Developer Tools:
- For changes to nvprof and Visual Profiler, see the changelog
- For new features, improvements, and bug fixes in CUPTI, see the changelog
- For new features, improvements, and bug fixes in Nsight Compute, see the changelog
- For new features, improvements, and bug fixes in Compute Sanitizer, see the changelog
- For new features, improvements, and bug fixes in CUDA-GDB, see the changelog

Deprecated or Dropped Features:
- Features deprecated in the current release of the CUDA software still work in the current release, but their documentation may have been removed, and they will become officially unsupported in a future release. We recommend that developers employ alternative solutions to these features in their software.

General CUDA:
- CentOS Linux 8 reached End-of-Life on December 31, 2021. Support for this OS is now removed from the CUDA Toolkit and is replaced by Rocky Linux 8.
- Server 2016 support has been deprecated and will be removed in a future release
- Kepler architecture support is removed from CUDA 12.0
- CUDA 11 applications that relied on Minor Version Compatibility are not guaranteed to work in CUDA 12.0 onwards. Developers will either need to statically link their applications, or recompile within the CUDA 12.0 environment to ensure continuity of development.

From 12.0, JIT LTO support is now part of CUDA Toolkit. JIT LTO support in the CUDA Driver through the cuLink driver APIs is officially deprecated. Driver JIT LTO will be available only for 11.x applications. The following enums supported by the cuLink Driver APIs for JIT LTO are deprecated:
- CU_JIT_INPUT_NVVM
- CU_JIT_LTO
- CU_JIT_FTZ
- CU_JIT_PREC_DIV
- CU_JIT_PREC_SQRT
- CU_JIT_FMA
- CU_JIT_REFERENCED_KERNEL_NAMES
- CU_JIT_REFERENCED_KERNEL_COUNT
- CU_JIT_REFERENCED_VARIABLE_NAMES
- CU_JIT_REFERENCED_VARIABLE_COUNT
- CU_JIT_OPTIMIZE_UNUSED_DEVICE_VARIABLES
- Existing 11.x CUDA applications using JIT LTO will continue to work on the 12.0/R525 and later driver. The driver cuLink API support for JIT LTO is not removed but will only support 11.x LTOIR. The cuLink driver API enums for JIT LTO may be removed in the future so we recommend transitioning over to CUDA Toolkit 12.0 for JIT LTO.
- 12.0 LTOIR will not be supported by the driver cuLink APIs. Applications built for 12.0 or later must use the nvJitLink shared library to benefit from JIT LTO.
- Refer to the CUDA 12.0 blog on JIT LTO for more details

CUDA Tools:
- CUDA-MEMCHECK is removed from CUDA 12.0, and has been replaced with Compute Sanitizer

CUDA Compiler:
- 32-bit native compilation and cross-compilation are removed from the CUDA 12.0 and later Toolkits. Use the CUDA Toolkit from earlier releases for 32-bit compilation. The CUDA Driver will continue to support running existing 32-bit applications on existing GPUs, except Hopper; Hopper does not support 32-bit applications. Ada will be the last architecture with driver support for 32-bit applications.

NVIDIA CUDA Toolkit 12.0.1 (for Windows 10) — view release notes

Updated: 2023-03-01
Update details:

What's new in this version:

New meta-packages for Linux installation:
- cuda-toolkit
  - Installs all CUDA Toolkit packages required to develop CUDA applications
  - Handles upgrading to the latest version of CUDA when it’s released
  - Does not include the driver
- cuda-toolkit-12
  - Installs all CUDA Toolkit packages required to develop CUDA applications
  - Handles upgrading to the next 12.x version of CUDA when it’s released
  - Does not include the driver
- New CUDA API to enable mini core dump programmatically is now available

CUDA Compilers:
- NVCC has added support for the following host compilers: GCC 12.2, NVC++ 22.11, Clang 15.0, VS2022 17.4
- Breakpoint and single-stepping behavior for multi-line statements in device code has been improved when code is compiled with nvcc using a GCC or Clang host compiler, or when compiled with NVRTC on non-Windows platforms. The debugger now correctly breakpoints and single-steps on each source line of a multi-line statement.
- PTX exposes a new special register in the public ISA that can be used to query the total size of shared memory, including user shared memory and software-reserved shared memory.
- NVCC and NVRTC now show preprocessed source line and column info in diagnostics to help users understand the message and identify the issue causing the diagnostic. The source line and column info can be turned off with --brief-diagnostics=true.

CUDA Developer Tools:
- For changes to nvprof and Visual Profiler, see the changelog
- For new features, improvements, and bug fixes in CUPTI, see the changelog
- For new features, improvements, and bug fixes in Nsight Compute, see the changelog
- For new features, improvements, and bug fixes in Compute Sanitizer, see the changelog
- For new features, improvements, and bug fixes in CUDA-GDB, see the changelog

Deprecated or Dropped Features:
- Features deprecated in the current release of the CUDA software still work in the current release, but their documentation may have been removed, and they will become officially unsupported in a future release. We recommend that developers employ alternative solutions to these features in their software.

General CUDA:
- CentOS Linux 8 reached End-of-Life on December 31, 2021. Support for this OS is now removed from the CUDA Toolkit and is replaced by Rocky Linux 8.
- Server 2016 support has been deprecated and will be removed in a future release
- Kepler architecture support is removed from CUDA 12.0
- CUDA 11 applications that relied on Minor Version Compatibility are not guaranteed to work in CUDA 12.0 onwards. Developers will either need to statically link their applications, or recompile within the CUDA 12.0 environment to ensure continuity of development.

From 12.0, JIT LTO support is now part of CUDA Toolkit. JIT LTO support in the CUDA Driver through the cuLink driver APIs is officially deprecated. Driver JIT LTO will be available only for 11.x applications. The following enums supported by the cuLink Driver APIs for JIT LTO are deprecated:
- CU_JIT_INPUT_NVVM
- CU_JIT_LTO
- CU_JIT_FTZ
- CU_JIT_PREC_DIV
- CU_JIT_PREC_SQRT
- CU_JIT_FMA
- CU_JIT_REFERENCED_KERNEL_NAMES
- CU_JIT_REFERENCED_KERNEL_COUNT
- CU_JIT_REFERENCED_VARIABLE_NAMES
- CU_JIT_REFERENCED_VARIABLE_COUNT
- CU_JIT_OPTIMIZE_UNUSED_DEVICE_VARIABLES
- Existing 11.x CUDA applications using JIT LTO will continue to work on the 12.0/R525 and later driver. The driver cuLink API support for JIT LTO is not removed but will only support 11.x LTOIR. The cuLink driver API enums for JIT LTO may be removed in the future so we recommend transitioning over to CUDA Toolkit 12.0 for JIT LTO.
- 12.0 LTOIR will not be supported by the driver cuLink APIs. Applications built for 12.0 or later must use the nvJitLink shared library to benefit from JIT LTO.
- Refer to the CUDA 12.0 blog on JIT LTO for more details

CUDA Tools:
- CUDA-MEMCHECK is removed from CUDA 12.0, and has been replaced with Compute Sanitizer

CUDA Compiler:
- 32-bit native compilation and cross-compilation are removed from the CUDA 12.0 and later Toolkits. Use the CUDA Toolkit from earlier releases for 32-bit compilation. The CUDA Driver will continue to support running existing 32-bit applications on existing GPUs, except Hopper; Hopper does not support 32-bit applications. Ada will be the last architecture with driver support for 32-bit applications.

Kaspersky Lab Products Remover 1.0.3029 — view release notes

Updated: 2023-02-09
Update details:

NVIDIA CUDA Toolkit 12.0.0 (for Windows 11) — view release notes

Updated: 2022-12-09
Update details:

What's new in this version:

General CUDA:
- CUDA 12.0 exposes programmable functionality for many features of the Hopper and Ada Lovelace architectures:

Many tensor operations now available via public PTX:
- TMA operations
- TMA bulk operations
- 32x Ultra xMMA (including FP8/FP16)
- Membar domains in Hopper, controlled via launch parameters
- Smem sync unit PTX and C++ API support
- Introduced C intrinsics for Cooperative Grid Array (CGA) relaxed barrier support
- Programmatic L2 Cache to SM multicast (Hopper-only)
- Public PTX for SIMT collectives - elect_one
- Genomics/DPX instructions now available for Hopper GPUs to provide faster combined-math arithmetic operations (three-way max, fused add+max, etc.)

Enhancements to the CUDA graphs API:
- You can now schedule graph launches from GPU device-side kernels by calling built-in functions. With this ability, user code in kernels can dynamically schedule graph launches, greatly increasing the flexibility of CUDA graphs.
- The cudaGraphInstantiate() API has been refactored to remove unused parameters
- Added the ability to use virtual memory management (VMM) APIs such as cuMemCreate() with GPUs masked by CUDA_VISIBLE_DEVICES
- Application and library developers can now programmatically update the priority of CUDA streams
- CUDA 12.0 adds support for revamped CUDA Dynamic Parallelism APIs, offering substantial performance improvements vs. the legacy CUDA Dynamic Parallelism APIs

Added new APIs to obtain unique stream and context IDs from user-provided objects:
- cuStreamGetId(CUstream hStream, unsigned long long *streamId)
- cuCtxGetId(CUcontext ctx, unsigned long long *ctxId)
- Added support for read-only cuMemSetAccess() flag CU_MEM_ACCESS_FLAGS_PROT_READ

CUDA Compilers:
- JIT LTO support is now officially part of the CUDA Toolkit through a separate nvJitLink library. A technical deep dive blog will go into more details. Note that the earlier implementation of this feature has been deprecated. Refer to the Deprecation/Dropped Features section below for details.

New host compiler support:
- GCC 12.1 (Official) and 12.2.1 (Experimental)
- VS 2022 17.4 Preview 3 fixes compiler errors mentioning an internal function std::_Bit_cast by using CUDA’s support for __builtin_bit_cast
- NVCC and NVRTC now support the C++20 dialect. Most of the language features are available in host and device code; some, such as coroutines, are not supported in device code. Modules are not supported in either host or device code. Host compiler minimum versions: GCC 10, Clang 11, VS2022, Arm C/C++ 22.x. Refer to the individual host compiler documentation for other feature limitations. Note that a compilation issue in C++20 mode with the <complex> header mentioning an internal function std::_Bit_cast is resolved in VS2022 17.4.
- NVRTC default C++ dialect changed from C++14 to C++17. Refer to the ISO C++ standard for reference on the feature set and compatibility between the dialects.
- NVVM IR Update: with CUDA 12.0 we are releasing NVVM IR 2.0 which is incompatible with NVVM IR 1.x accepted by the libNVVM compiler in prior CUDA toolkit releases. Users of the libNVVM compiler in CUDA 12.0 toolkit must generate NVVM IR 2.0.

NVIDIA CUDA Toolkit 12.0.0 (for Windows 10) — view release notes

Updated: 2022-12-09
Update details:

What's new in this version:

General CUDA:
- CUDA 12.0 exposes programmable functionality for many features of the Hopper and Ada Lovelace architectures:

Many tensor operations now available via public PTX:
- TMA operations
- TMA bulk operations
- 32x Ultra xMMA (including FP8/FP16)
- Membar domains in Hopper, controlled via launch parameters
- Smem sync unit PTX and C++ API support
- Introduced C intrinsics for Cooperative Grid Array (CGA) relaxed barrier support
- Programmatic L2 Cache to SM multicast (Hopper-only)
- Public PTX for SIMT collectives - elect_one
- Genomics/DPX instructions now available for Hopper GPUs to provide faster combined-math arithmetic operations (three-way max, fused add+max, etc.)

Enhancements to the CUDA graphs API:
- You can now schedule graph launches from GPU device-side kernels by calling built-in functions. With this ability, user code in kernels can dynamically schedule graph launches, greatly increasing the flexibility of CUDA graphs.
- The cudaGraphInstantiate() API has been refactored to remove unused parameters
- Added the ability to use virtual memory management (VMM) APIs such as cuMemCreate() with GPUs masked by CUDA_VISIBLE_DEVICES
- Application and library developers can now programmatically update the priority of CUDA streams
- CUDA 12.0 adds support for revamped CUDA Dynamic Parallelism APIs, offering substantial performance improvements vs. the legacy CUDA Dynamic Parallelism APIs

Added new APIs to obtain unique stream and context IDs from user-provided objects:
- cuStreamGetId(CUstream hStream, unsigned long long *streamId)
- cuCtxGetId(CUcontext ctx, unsigned long long *ctxId)
- Added support for read-only cuMemSetAccess() flag CU_MEM_ACCESS_FLAGS_PROT_READ

CUDA Compilers:
- JIT LTO support is now officially part of the CUDA Toolkit through a separate nvJitLink library. A technical deep dive blog will go into more details. Note that the earlier implementation of this feature has been deprecated. Refer to the Deprecation/Dropped Features section below for details.

New host compiler support:
- GCC 12.1 (Official) and 12.2.1 (Experimental)
- VS 2022 17.4 Preview 3 fixes compiler errors mentioning an internal function std::_Bit_cast by using CUDA’s support for __builtin_bit_cast
- NVCC and NVRTC now support the C++20 dialect. Most of the language features are available in host and device code; some, such as coroutines, are not supported in device code. Modules are not supported in either host or device code. Host compiler minimum versions: GCC 10, Clang 11, VS2022, Arm C/C++ 22.x. Refer to the individual host compiler documentation for other feature limitations. Note that a compilation issue in C++20 mode with the <complex> header mentioning an internal function std::_Bit_cast is resolved in VS2022 17.4.
- NVRTC default C++ dialect changed from C++14 to C++17. Refer to the ISO C++ standard for reference on the feature set and compatibility between the dialects.
- NVVM IR Update: with CUDA 12.0 we are releasing NVVM IR 2.0 which is incompatible with NVVM IR 1.x accepted by the libNVVM compiler in prior CUDA toolkit releases. Users of the libNVVM compiler in CUDA 12.0 toolkit must generate NVVM IR 2.0.

Keyman Developer 15.0.274 — view release notes

Updated: 2022-11-30
Update details:

Kaspersky Lab Products Remover 1.0.2686 — view release notes

Updated: 2022-11-11
Update details:

NVIDIA CUDA Toolkit 11.8.0 (for Windows 11) — view release notes

Updated: 2022-10-04
Update details:

NVIDIA CUDA Toolkit 11.8.0 (for Windows 10) — view release notes

Updated: 2022-10-04
Update details:

Kaspersky Lab Products Remover 1.0.2376 — view release notes

Updated: 2022-08-18
Update details: