DOCSP-31503 - 10.2 Spark Features #171

Merged (1 commit) on Jul 20, 2023
17 changes: 14 additions & 3 deletions source/release-notes.txt
@@ -2,12 +2,23 @@
Release Notes
=============

MongoDB Connector for Spark 10.2
--------------------------------

The 10.2 connector release includes the following new features:

- Added the ``ignoreNullValues`` write-configuration property, which enables you
to control whether the connector ignores null values. In previous versions,
the connector always wrote ``null`` values to MongoDB.
- Added options for the ``convertJson`` write-configuration property.
- Added the ``change.stream.micro.batch.max.partition.count`` read-configuration property,
which allows you to divide micro-batches into multiple partitions for parallel
processing.
- Improved change stream schema inference when using the
``change.stream.publish.full.document.only`` read-configuration property.
- Added the ``change.stream.startup.mode`` read-configuration property, which specifies
how the connector processes change events when no offset is available.
- Added support for attaching a comment to operations.
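
A minimal PySpark sketch of how the new configuration properties above might be set. The connection URI, database, and collection names are placeholders, a running ``SparkSession`` (``spark``) and a ``DataFrame`` (``df``) are assumed, and the example values (such as the partition count and startup mode) are illustrative, not defaults:

```python
# Hedged sketch: new 10.2 write-configuration properties.
write_options = {
    "connection.uri": "mongodb://localhost:27017/",  # placeholder URI
    "database": "test",                              # hypothetical database
    "collection": "events",                          # hypothetical collection
    "ignoreNullValues": "true",  # new in 10.2: do not write null fields
    "convertJson": "any",        # convertJson now accepts additional options
    "comment": "spark-load",     # 10.2 adds comment support on operations
}

# Hedged sketch: new 10.2 read-configuration properties for change streams.
read_options = {
    "connection.uri": "mongodb://localhost:27017/",
    "database": "test",
    "collection": "events",
    # New in 10.2: divide each micro-batch into multiple partitions
    # for parallel processing (illustrative value).
    "change.stream.micro.batch.max.partition.count": "4",
    # Improved schema inference applies when publishing full documents only.
    "change.stream.publish.full.document.only": "true",
    # New in 10.2: how to process change events when no offset is available
    # (illustrative value).
    "change.stream.startup.mode": "latest",
}

# Usage (requires the 10.2 connector on the classpath; not executed here):
# df.write.format("mongodb").mode("append").options(**write_options).save()
# stream = spark.readStream.format("mongodb").options(**read_options).load()
```

The dictionaries only collect option names from the release notes; passing them through ``options(**...)`` is the standard Spark pattern for supplying connector configuration.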

MongoDB Connector for Spark 10.1.1
----------------------------------