README.md (9 additions, 0 deletions)

@@ -102,6 +102,15 @@ connectionProperties = {

For more information and explanation, visit the closed [issue](https://github.com/microsoft/sql-spark-connector/issues/26).

### datetime2(0) results in com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed

This issue arises from Spark not supporting `datetime2`.
**Collaborator** commented:

The SQL `datetime` data type only allows 3 digits of fractional seconds, while a Spark DataFrame may carry more digits than `datetime` allows. There are two workarounds: 1. truncate the timestamps in the Spark DataFrame to 3 digits of milliseconds (see the sketch below); 2. use the `datetime2` data type for the SQL table column, which allows 7 digits of fractional seconds.
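
As an illustration of the first workaround, here is a minimal PySpark sketch; the DataFrame `df` and the column name `event_time` are assumptions for the example, not names from this repository:

```python
from pyspark.sql import functions as F

# A minimal sketch, assuming a DataFrame `df` with a timestamp column
# `event_time` (both names are placeholders). Formatting to millisecond
# precision and casting back to timestamp truncates the extra
# fractional-second digits before writing to a SQL `datetime` column.
df = df.withColumn(
    "event_time",
    F.date_format(F.col("event_time"), "yyyy-MM-dd HH:mm:ss.SSS").cast("timestamp"),
)
```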

**Author** replied:

Hello @luxu1-ms, the workarounds are clear and are documented in the two referenced issues. The point is: if your SQL table has a column of type `datetime2(0)`, the connector simply does not work, so this needs to be documented!
If you cannot change the SQL table away from `datetime2(0)` to some other `datetime2(x)` where x > 0, the connector won't work!

For more information, see the details in closed issues [#39](https://github.com/microsoft/sql-spark-connector/issues/39) and [#83](https://github.com/microsoft/sql-spark-connector/issues/83).

For `datetime2(0)` you need to work around the problem by changing your SQL table structure to `datetime2(x)` where x > 0.

This will only be resolved once the Spark pull request (https://github.com/apache/spark/pull/32655) is incorporated into your Spark environment.

## Get Started

The Apache Spark Connector for SQL Server and Azure SQL is based on the Spark DataSourceV1 API and SQL Server Bulk API and uses the same interface as the built-in JDBC Spark-SQL connector. This allows you to easily integrate the connector and migrate your existing Spark jobs by simply updating the format parameter with `com.microsoft.sqlserver.jdbc.spark`.
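
For illustration, the following is a minimal write sketch, not copied from this README: `df` is assumed to be an existing Spark DataFrame, and the server, database, table, and credential values are placeholders.

```python
# A minimal sketch of writing a DataFrame through the connector.
# All <...> values are placeholders; substitute your own connection details.
df.write \
    .format("com.microsoft.sqlserver.jdbc.spark") \
    .mode("overwrite") \
    .option("url", "jdbc:sqlserver://<server>:1433;databaseName=<database>") \
    .option("dbtable", "dbo.MyTable") \
    .option("user", "<username>") \
    .option("password", "<password>") \
    .save()
```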