
Releases: googleapis/java-bigtable-hbase

bigtable-client-0.9.1

21 Jul 19:39
  • You must now use version 1.1.33.Fork19 of the netty-tcnative-boringssl-static library.
  • You can now use a single netty-tcnative-boringssl-static JAR file for all supported platforms (Linux, OS X, and Windows). See "Setting up encryption" for instructions on how to set up your pom.xml file.
  • Added two new options, which you can set in your hbase-site.xml file or programmatically:
    • google.bigtable.rpc.use.timeouts: Determines whether RPCs will time out after a specified number of milliseconds. Set to false (default) or true.
    • google.bigtable.rpc.timeout.ms: The timeout value, in milliseconds. Defaults to 60000 (1 minute) for consistency with HBase. For small gets and puts, 2000 is a reasonable timeout value. For other workloads, you will need to experiment to find an appropriate timeout value.
  • Fixed an issue that caused IPv6 addresses to be used in environments that do not support IPv6, such as Docker containers that are configured to use only IPv4.
  • Calls to getTable() are now five to ten times faster, dropping from microseconds to hundreds of nanoseconds.
  • Added a BigtableInstanceClient, accessible from BigtableSession, that you can use for administering your instance and cluster.
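
As a sketch of the setup described above (the Fork19 version number comes from this release; the Maven coordinates are the standard ones for this library), the netty-tcnative dependency goes in your pom.xml:

```xml
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative-boringssl-static</artifactId>
  <version>1.1.33.Fork19</version>
</dependency>
```

and the two new RPC timeout options can be set in hbase-site.xml — the 2000 ms value here reflects the small-get/put guidance above:

```xml
<property>
  <name>google.bigtable.rpc.use.timeouts</name>
  <value>true</value>
</property>
<property>
  <name>google.bigtable.rpc.timeout.ms</name>
  <value>2000</value>
</property>
```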

bigtable-client-0.9.0

29 Jun 03:48
  • Updated implementation to use the new Cloud Bigtable instance API and removed the cluster API implementation. Using this new client will require changing the configuration to work with the new API.
  • Updated Cloud Dataflow version to 1.6
  • Added dynamic rebalancing to the Cloud Dataflow connector's support for reading from Cloud Bigtable. Dynamic rebalancing allows Cloud Dataflow to split shards that are performing more slowly than other shards into smaller shards so that the job finishes faster, as described in this blog post.
  • Added support for FuzzyRowFilter
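
A minimal sketch of the newly supported FuzzyRowFilter against the HBase 1.x client API; the row-key shape ("????_2016") and mask are hypothetical, chosen only to illustrate the fixed/wildcard convention:

```java
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class FuzzyScanExample {
  public static Scan fuzzyScan() {
    // Match rows shaped like "????_2016": the first four bytes are
    // wildcards (mask byte 1), the "_2016" suffix is fixed (mask byte 0).
    byte[] rowPattern = Bytes.toBytes("????_2016");
    byte[] fuzzyMask = {1, 1, 1, 1, 0, 0, 0, 0, 0};
    FuzzyRowFilter filter =
        new FuzzyRowFilter(Arrays.asList(new Pair<>(rowPattern, fuzzyMask)));
    return new Scan().setFilter(filter);
  }
}
```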

version 0.3.0

02 May 18:10
  • The method BufferedMutator#mutate() now uses bulk mutations, which improve throughput.
  • You can now perform bulk mutations without using the HBase API by calling the method BigtableSession#createBulkMutation().
  • Bulk mutations have been made more robust by retrying failed individual mutations in bulk.
  • You can now perform bulk reads without using the HBase API by calling the method BigtableSession#createBulkRead().
  • You can now delete row ranges in bulk by calling the method BigtableTableAdminGrpcClient#bulkDeleteRows() or using the HBase API's truncateTable() methods. In addition, the HBase API's truncateTable() methods no longer drop and recreate the table.
  • The method AbstractCloudBigtableTableFn#retrowException() has been renamed to AbstractCloudBigtableTableFn#rethrowException(). If you have created a subclass of AbstractCloudBigtableTableFn that uses this method, you must rename the method in your subclass.
  • You can now call the method org.apache.hadoop.hbase.client.Admin#modifyColumn() to alter an existing column. In addition, the HBase shell's alter command now allows you to alter an existing column.
  • The Google authentication library was updated to fix issues with long-running jobs in App Engine's flexible environment.
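
The BufferedMutator path described above can be sketched as follows; this uses the standard HBase 1.x API (table name, family, and row keys are placeholders), and each mutate() call is now batched into bulk mutations under the hood:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkMutateExample {
  // Build a batch of Puts; "cf"/"qual" are placeholder column names.
  static List<Put> buildPuts(int count) {
    List<Put> puts = new ArrayList<>();
    for (int i = 0; i < count; i++) {
      Put put = new Put(Bytes.toBytes("row-" + i));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"),
          Bytes.toBytes("value-" + i));
      puts.add(put);
    }
    return puts;
  }

  public static void writeRows(Connection connection) throws IOException {
    // close() flushes any mutations still buffered in the mutator.
    try (BufferedMutator mutator =
        connection.getBufferedMutator(TableName.valueOf("my-table"))) {
      mutator.mutate(buildPuts(1000));
    }
  }
}
```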

version 0.2.4

28 Apr 18:38
  • Retry logic is improved for scans, mutations (puts), and gets. Note: All HBase Puts are now automatically retried even if no timestamp is set.
  • Throughput is significantly increased for asynchronous (BufferedMutator) operations.
  • Latency is reduced for synchronous operations.
  • Resolved several dependency conflicts when using the client with the gRPC library.
  • Cloud Bigtable operations, such as the HBase shell quickstart, can now be used in Google Cloud Shell.

Cloud Dataflow connector

  • You can now use the Cloud Dataflow connector with Cloud Dataflow 1.5. In addition, the connector uses larger batch sizes when you use Cloud Dataflow 1.5.
  • Added a utility class, AbstractCloudBigtableDoFn, to the Cloud Dataflow connector. You can use this utility class to get a BigtableConnection for doing additional Cloud Bigtable work. For example, you can use this class to create a DoFn that performs additional Gets while doing a table scan.
  • Added a new static method, com.google.cloud.bigtable.dataflow.CloudBigtableIO.readBulk(), that reads an array of elements at a time. You can use the array of elements to perform more efficient bulk processing than is possible with individual elements.
  • You can now use Cloud Dataflow to import Hadoop sequence files generated by HBase.
  • The Cloud Dataflow connector can now use Cloud Bigtable Sources across more workers. A Source will now support up to 4,000 splits.
  • The Cloud Dataflow connector no longer performs validation lookups, which were generating many unnecessary requests.

bigtable-client-0.2.3

07 Oct 21:06

Do not use this version. Use the latest instead. This version does not work in environments that have problems using IPv6, such as Docker and Dataflow.

version 0.2.2

20 Nov 21:52
  • The hbase-client artifact is no longer a required dependency and should be removed from your pom.xml file.
  • You can now use OpenSSL for encryption instead of adding the ALPN library to your boot classpath. Using OpenSSL typically results in a 5% to 20% performance improvement for encryption. See "Using OpenSSL encryption" for details.
  • There is now a helper class that simplifies the process of creating a connection. See "Including connection settings in your code" for details.
  • The RowFilter filter is now supported. With this filter, you can use either the BinaryComparator comparator or the RegexStringComparator comparator with no flags and the EQUAL operator.
  • Fixed an issue that could cause deletes to hang forever during scans.
  • When there are duplicate rows that contain the same row keys, column families, column qualifiers, and timestamps, the client now de-duplicates the rows.
  • You can now use the HBase shell's alter command to delete column families.
  • You can now use the HBase shell's truncate command, which disables, drops, and re-creates a table.
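
The two supported RowFilter combinations can be sketched as below, using the standard HBase 1.x filter API; the row key and regex are placeholders:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowFilterExample {
  // Exact row-key match with BinaryComparator and the EQUAL operator.
  public static Scan exactMatch() {
    RowFilter filter = new RowFilter(
        CompareFilter.CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes("row-42")));
    return new Scan().setFilter(filter);
  }

  // Regex row-key match with RegexStringComparator (no flags), EQUAL only.
  public static Scan regexMatch() {
    RowFilter filter = new RowFilter(
        CompareFilter.CompareOp.EQUAL,
        new RegexStringComparator("row-.*"));
    return new Scan().setFilter(filter);
  }
}
```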

version 0.2.1

01 Oct 19:15
  • Java 8 is now supported.
  • A Cloud Dataflow connector for Cloud Bigtable is now available. See "Dataflow Connector for Cloud Bigtable" for details.
  • Fixed an issue with authentication tokens that could cause long-running applications to hang.
  • When you call Table.batch() to perform one or more Get operations, any Get operations that fail will now be retried.
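
The Table.batch() retry behavior above applies to code shaped like this sketch (standard HBase 1.x API; the row keys are placeholders):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchGetExample {
  // Build the Gets to run in one batch call.
  static List<Row> buildGets() {
    return Arrays.asList(
        new Get(Bytes.toBytes("row-1")),
        new Get(Bytes.toBytes("row-2")));
  }

  public static Object[] batchGets(Table table)
      throws IOException, InterruptedException {
    List<Row> gets = buildGets();
    // Any Get in the batch that fails is now retried automatically;
    // results[i] holds the outcome for gets.get(i).
    Object[] results = new Object[gets.size()];
    table.batch(gets, results);
    return results;
  }
}
```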

version 0.2.0

26 Aug 19:38
  • Easier use in Maven pom.xml files: removed the need for a "shaded" classifier and removed unnecessary dependencies.
  • Moved the Bigtable import MapReduce job into a separate project so that the bigtable-hbase-1.* projects no longer depend on hbase-server.
  • Added PageFilter support
  • Added support for region splits on table creation via the Admin.createTable(HTableDescriptor desc, byte[][] splitKeys) method
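
The pre-split createTable() overload mentioned above can be sketched as follows (standard HBase 1.x admin API; the table name, family, and split boundaries are placeholders):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitExample {
  // Three boundaries produce four regions:
  // (-inf,"g"), ["g","n"), ["n","t"), ["t",+inf).
  static byte[][] splitKeys() {
    return new byte[][] {
        Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("t")};
  }

  public static void createPreSplitTable(Admin admin) throws IOException {
    HTableDescriptor desc =
        new HTableDescriptor(TableName.valueOf("my-table"));
    desc.addFamily(new HColumnDescriptor("cf"));
    admin.createTable(desc, splitKeys());
  }
}
```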

version 0.1.9

14 Jul 17:57

Cloud-Bigtable-client-0.1.9

version 0.1.5

05 May 14:56

Cloud-Bigtable-client-0.1.5