Talend Component Kit provides a migration mechanism between two versions of a component to let you ensure backward compatibility. For example, a new version of a component may have new options that need to be remapped, set with a default value in the older versions, or disabled.

This tutorial shows how to create a migration handler for a component that needs to be upgraded from a version 1 to a version 2, where the upgrade adds new options to the component. It assumes that you know the basics of component development and are familiar with component project generation and implementation.

To follow this tutorial, you need: Java 8, a Talend component development environment using Talend Component Kit (refer to this document), and a generated project containing a simple processor component created with the Talend Component Kit Starter.

First, create a simple processor component configured as follows: create a simple configuration class that represents a basic authentication and that can be used in any component requiring this kind of authentication, then create a simple output component that uses the configuration defined earlier. The component configuration is injected into the component constructor. The version of the configuration class corresponds to the component version. With these two classes, the first version of the component is ready to use a simple authentication mechanism.

Now, assuming that the component needs to support a new authentication mode following a new requirement, the next steps are: creating a version 2 of the component that supports the new authentication mode, and handling the migration from the first version to the new version.

The second version of the component needs to support a new authentication method and let users choose the authentication mode they want to use from a dropdown list. Add an Oauth2 authentication mode to the component in addition to the basic mode. Once the options of the new authentication mode are defined, wrap the configuration created above in a global configuration with the basic authentication mode and add an enumeration to let the user choose the mode to use. For example, create an AuthenticationConfiguration class as follows (a minimal sketch is shown at the end of this overview). Using the @ActiveIf annotation allows you to activate the authentication type according to the selected authentication mode.

Edit the component to use the new configuration that supports an additional authentication mode, and upgrade the component version from 1 to 2 since its configuration has changed. The component now supports two authentication modes in its version 2.

Once the new version is ready, you can implement the migration handler that takes care of adapting the old configuration to the new one. What happens if an old configuration is passed to the new component version? It simply fails, as version 2 no longer recognizes the old layout, and an old configuration may already be persisted by an application that integrates the version 1 of the component (Studio or web application). For that reason, a migration handler that adapts the old configuration to the new one is required. This is achieved by declaring a migration handler class in the @Version annotation of the component class. Add a migration handler class to the component version and create the migration handler class MyOutputMigrationHandler.
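To make the steps above more concrete, here is a minimal, illustrative sketch of what the version 2 configuration could look like. BasicAuth stands for the version 1 configuration class and Oauth2 for the options of the new mode; field names, defaults and layout details are assumptions, not the exact tutorial sources.

```java
import java.io.Serializable;

import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.configuration.condition.ActiveIf;

public class AuthenticationConfiguration implements Serializable {

    public enum AuthenticationMode {
        BASIC,
        OAUTH2
    }

    // Dropdown letting the user pick the mode; Oauth2 is the default in version 2.
    @Option
    private AuthenticationMode authenticationMode = AuthenticationMode.OAUTH2;

    // Version 1 configuration, only active when the basic mode is selected.
    @Option
    @ActiveIf(target = "authenticationMode", value = "BASIC")
    private BasicAuth basicAuth;

    // New mode options, only active when Oauth2 is selected.
    @Option
    @ActiveIf(target = "authenticationMode", value = "OAUTH2")
    private Oauth2 oauth2;
}
```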
The migration handler class needs to implement the MigrationHandler interface and override its migrate method. This method receives the incoming version, which is the version of the configuration being migrated from, and a map (key, value) of the configuration, where the key is the configuration path and the value is the value of the configuration. You need to be familiar with the component configuration path construction to better understand this part; refer to Defining component layout and configuration.

As a reminder, the following changes were made since the version 1 of the component: the BasicAuth configuration from the version 1 is no longer the root configuration, as it now sits under AuthenticationConfiguration; AuthenticationConfiguration is the new root configuration; and the component supports a new authentication mode (Oauth2), which is the default mode in the version 2 of the component.

To migrate the old component version to the new version and keep backward compatibility, you need to remap the old configuration to the new one and give adequate default values to some options. In this scenario, it means making all configurations based on the version 1 of the component have authenticationMode set to basic by default, and remapping the old basic authentication configuration to the new one (see the sketch below). If a configuration option has been renamed between two component versions, you can get the old option from the configuration map using its old path and set its value using its new path. You can now upgrade your component without losing backward compatibility.
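Here is a minimal sketch of such a handler for the scenario above. The configuration paths (configuration.username, configuration.basicAuth.username, and so on) are assumptions and must be adapted to your actual layout.

```java
import java.util.Map;

import org.talend.sdk.component.api.component.MigrationHandler;

public class MyOutputMigrationHandler implements MigrationHandler {

    @Override
    public Map<String, String> migrate(final int incomingVersion, final Map<String, String> incomingData) {
        if (incomingVersion == 1) {
            // version 1 did not know about the authentication mode: default it to basic
            incomingData.put("configuration.authenticationMode", "BASIC");

            // remap the old root BasicAuth options under their new nested path
            remap(incomingData, "configuration.username", "configuration.basicAuth.username");
            remap(incomingData, "configuration.password", "configuration.basicAuth.password");
        }
        return incomingData;
    }

    private void remap(final Map<String, String> data, final String oldPath, final String newPath) {
        final String value = data.remove(oldPath);
        if (value != null) {
            data.put(newPath, value);
        }
    }
}
```

The handler is then referenced from the component class, for example with @Version(value = 2, migrationHandler = MyOutputMigrationHandler.class).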
Before being able to develop components using Talend Component Kit, you need the right system configuration and tools. Although Talend Component Kit comes with some embedded tools, such as the Maven and Gradle wrappers, you still need to prepare your system. A Talend Component Kit plugin for IntelliJ is also available and lets you design and generate your component project directly from IntelliJ.
Before implementing a component logic and configuration, you need to specify the family and category it belongs to, the component type and name, as well as a few other generic parameters. This set of metadata, and more particularly the family, categories and component type, is mandatory for the component to be recognized and loaded in Talend Studio or Cloud applications. Some of these parameters are handled at project generation using the starter, but can still be accessed and updated later on.

The family and category of a component are automatically written in the package-info.java file of the component package, using the @Components annotation. By default, these parameters are already configured in this file when you import your project in your IDE. Their values correspond to what was defined during the project definition with the starter. Multiple components can share the same family and category value, but the family + name pair must be unique for the system. A component can belong to one family only and to one or several categories. If not specified, the category defaults to Misc. The package-info.java file also defines the component family icon, which is different from the component icon. You can learn how to customize this icon in this section. A sample package-info.java is sketched at the end of this section.

Components can require metadata to be integrated in Talend Studio or Cloud platforms. Metadata is set on the component class and belongs to the org.talend.sdk.component.api.component package. When you generate your project and import it in your IDE, the icon and version both come with a default value. @Icon: Sets an icon key used to represent the component. You can use a custom key with the custom() method but the icon may not be rendered properly. The icon defaults to Check; replace it with a custom icon, as described in this section. @Version: Sets the component version, 1 by default. Learn how to manage different versions and migrations between your component versions in this section.

Every component family and component needs to have a representative icon, and you have to define a custom icon as follows: for the component family, the icon is defined in the package-info.java file; for the component itself, you declare the icon in the component class. Custom icons must comply with the following requirements: icons must be stored in the src/main/resources/icons folder of the project; icon file names need to match one of the following patterns: IconName.svg or IconName_icon32.png (the latter runs in degraded mode in Talend Cloud); replace IconName with the name of your choice; icons must be squared, even for the SVG format. Note that SVG icons are not supported by Talend Studio and can cause the deployment of the component to fail. If you aim at deploying a custom component to Talend Studio, specify PNG icons or use the Maven (or Gradle) svg2png plugin to convert SVG icons to PNG. If you want finer control over both images, you can provide both in your component. Ultimately, you can also remove SVG parameters from the talend.component.server.icon.paths property in the HTTP server configuration.
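As an illustration, here is a minimal sketch of this metadata. The family, category, icon keys and package name are assumptions, not values from an actual project.

```java
// src/main/java/com/mycompany/components/package-info.java
@Components(family = "MyFamily", categories = "Misc")
@Icon(value = Icon.IconType.CUSTOM, custom = "my_family_icon")
package com.mycompany.components;

import org.talend.sdk.component.api.component.Components;
import org.talend.sdk.component.api.component.Icon;
```

And on the component class itself (imports from org.talend.sdk.component.api omitted for brevity):

```java
@Version(1)
@Icon(value = Icon.IconType.CUSTOM, custom = "my_component_icon")
@Processor(name = "MyOutput")
public class MyOutput implements Serializable {
    // component implementation
}
```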
For any purpose, you can also add user-defined metadata to your component with the @Metadatas annotation. You can also use an SPI implementing org.talend.sdk.component.spi.component.ComponentMetadataEnricher.
Talend Component scanning is based on plugins. To make sure that plugins can be developed in parallel and avoid conflicts, they need to be isolated (component or group of components in a single jar/plugin).
Multiple options are available:
Graph classloading: this option allows you to link the plugins and dependencies together dynamically in any direction. For example, the graph classloading can be illustrated by OSGi containers.
Tree classloading: a shared classloader inherited by plugin classloaders. However, plugin classloader classes are not seen by the shared classloader, nor by other plugins. For example, the tree classloading is commonly used by Servlet containers where plugins are web applications.
Flat classpath: listed for completeness but rejected by design because it doesn’t comply with this requirement.
To avoid the extra complexity added by such a layer, Talend Component Kit relies on tree classloading. The advantage is that you do not need to define the relationships with other plugins/dependencies, because they are built in.
Here is a representation of this solution:
The shared area contains Talend Component Kit API, which only contains by default the classes shared by the plugins.
Then, each plugin is loaded with its own classloader and dependencies.
This section explains the overall way to handle dependencies but the Talend Maven plugin provides a shortcut for that.
A plugin is a JAR file that was enriched with the list of its dependencies. By default, Talend Component Kit runtime is able to read the output of maven-dependency-plugin in TALEND-INF/dependencies.txt. You just need to make sure that your component defines the following plugin:
Once built, check the JAR file and look for the following lines:
What is important to see is the scope related to the artifacts:
The APIs (component-api and geronimo-annotation_1.3_spec) are provided because you can consider them to be there when executing (they come with the framework).
Your specific dependencies (awesome-project in the example above) are marked as compile: they are included as needed dependencies by the framework (note that using runtime works too).
The other dependencies, such as test dependencies, are ignored.
Even if a flat classpath deployment is possible, it is not recommended because it would then reduce the capabilities of the components.
The way the framework resolves dependencies is based on a local Maven repository layout. As a quick reminder, it looks like:
This is the only layout the framework uses. The logic converts the tuple {groupId, artifactId, version, type (jar)} to the corresponding path in the repository.
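For illustration, that conversion can be sketched as follows (this is not the framework's actual code):

```java
public final class RepositoryPaths {

    private RepositoryPaths() {
        // utility class
    }

    static String toRepositoryPath(final String groupId, final String artifactId,
                                   final String version, final String type) {
        return groupId.replace('.', '/') + '/' + artifactId + '/' + version + '/'
                + artifactId + '-' + version + '.' + type;
    }
}
```

For example, {org.talend.demo, awesome-project, 1.0.0, jar} resolves to org/talend/demo/awesome-project/1.0.0/awesome-project-1.0.0.jar.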
Talend Component Kit runtime has two ways to find an artifact:
From the file system based on a configured Maven 2 repository.
From a fat JAR (uber JAR) with a nested Maven repository under MAVEN-INF/repository.
The first option uses either ${user.home}/.m2/repository (default) or a specific path configured when creating a ComponentManager. The nested repository option needs some configuration during the packaging to ensure the repository is correctly created.
To create the nested MAVEN-INF/repository repository, you can use the nested-maven-repository extension:
Plugins are usually programmatically registered. If you want to make some of them automatically available, you need to generate a TALEND-INF/plugins.properties file that maps a plugin name to coordinates found with the Maven mechanism described above.
You can enrich maven-shade-plugin to do it:
Here is a final job/application bundle based on maven-shade-plugin:
The configuration unrelated to transformers depends on your application.
ContainerDependenciesTransformer is used to embed a Maven repository, and PluginTransformer to create a file that lists (one per line) the artifacts representing plugins.
Both transformers share most of their configuration:
session: must be set to ${session}. This is used to retrieve dependencies.
scope: a comma-separated list of scopes to include in the artifact filtering (note that the default relies on provided but you can replace it with compile, runtime, runtime+compile, runtime+system or test).
include: a comma-separated list of artifacts to include in the artifact filtering.
exclude: a comma-separated list of artifacts to exclude in the artifact filtering.
userArtifacts: set of artifacts to include (groupId, artifactId, version, type - optional, file - optional for plugin transformer, scope - optional) which can be forced inline. This parameter is mainly useful for PluginTransformer.
includeTransitiveDependencies: should transitive dependencies of the components be included. Set to true by default. It is active for userArtifacts.
includeProjectComponentDependencies: should component project dependencies be included. Set to false by default. It is not needed when a job project uses isolation for components.
With the component tooling, it is recommended to keep the default locations. Also, if you need to use project dependencies, you may need to refactor your project structure to ensure component isolation. Talend Component Kit lets you handle that part, but the recommended practice is to use userArtifacts for the components instead of project dependencies.
By default, input components are designed to receive a one-time batch of data to process. By enabling the streaming mode, you can instead set your component to process a continuous incoming flow of data.
When streaming is enabled on an input component, the component tries to pull data from its producer. When no data is pulled, it waits for a defined period of time before trying to pull data again, and so on. This period of time between tries is defined by a strategy.
This document explains how to configure this strategy and the cases where it can fit your needs.
Before enabling streaming on your component, make sure that it fits the scope and requirements of your project and that regular batch processing cannot be used instead.
Streaming is designed to help you deal with real-time or near real-time data processing cases, and should be used only for such cases. Enabling streaming impacts performance when processing batches of data.
You can enable streaming right from the design phase of the project by enabling the Stream toggle in the basic configuration of your future component in the Component Kit Starter.
Doing so adds a default streaming-ready configuration to your component when generating the project. This default configuration implements a constant pause duration of 500 ms between retries, with no limit of retries.
Without any configuration, streaming components have an infinite lifecycle and never stop. Sometimes, you may need to stop the component after a certain number of records have been read or after a given amount of time.
You can add configuration that stops the data reading in your input component once it reaches the defined limits. To enable it, set PartitionMapper#stoppable to true. An important condition is that PartitionMapper#infinite must also be true.
Here is some sample code:
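The following is a minimal sketch of a stoppable streaming mapper, assuming the stoppable flag described above; all class and method names are illustrative, and MyStreamingEmitter is the emitter sketched a few paragraphs below.

```java
import static java.util.Collections.singletonList;

import java.io.Serializable;
import java.util.List;

import org.talend.sdk.component.api.component.Icon;
import org.talend.sdk.component.api.component.Version;
import org.talend.sdk.component.api.input.Assessor;
import org.talend.sdk.component.api.input.Emitter;
import org.talend.sdk.component.api.input.PartitionMapper;
import org.talend.sdk.component.api.input.Split;

@Version(1)
@Icon(value = Icon.IconType.CUSTOM, custom = "my_streaming_input")
@PartitionMapper(name = "MyStreamingInput", infinite = true, stoppable = true)
public class MyStreamingMapper implements Serializable {

    @Assessor
    public long estimateSize() {
        return 1; // a streaming source generally cannot estimate a meaningful dataset size
    }

    @Split
    public List<MyStreamingMapper> split() {
        return singletonList(this); // no parallelism in this sketch
    }

    @Emitter
    public MyStreamingEmitter createEmitter() {
        return new MyStreamingEmitter();
    }
}
```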
There are two reading stop conditions (can be combined):
maxDurationMs : stop after n milliseconds elapsed.
maxRecords : stop after n records read.
See next subsections for configuring those values.
If you choose to use a stoppable streaming component, you will almost certainly have to adapt your code to the backend technology and to the way values are read. For instance, if your component reads 100 values at once and the maxRecords value is 50, you may lose 50 values (if the 100 values were acknowledged).
In that case, to build a correct strategy in your component, you can access the stop condition values. To access this information, use a @PostConstruct method with the @Option annotation in your Emitter class.
Available options:
Option.MAX_DURATION_PARAMETER : reflects the maxDurationMs parameter.
Option.MAX_RECORDS_PARAMETER : reflects the maxRecords parameter.
Here is a code sample:
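The following sketch shows an emitter reading those options, assuming the Option.MAX_DURATION_PARAMETER and Option.MAX_RECORDS_PARAMETER constants described above; the pulling logic itself is illustrative.

```java
import java.io.Serializable;

import javax.annotation.PostConstruct;

import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.input.Producer;
import org.talend.sdk.component.api.record.Record;

public class MyStreamingEmitter implements Serializable {

    private long maxDurationMs;
    private long maxRecords;

    @PostConstruct
    public void init(@Option(Option.MAX_DURATION_PARAMETER) final long maxDurationMs,
                     @Option(Option.MAX_RECORDS_PARAMETER) final long maxRecords) {
        this.maxDurationMs = maxDurationMs; // -1 means no duration limit
        this.maxRecords = maxRecords;       // -1 means no record limit
    }

    @Producer
    public Record next() {
        // Use maxRecords and maxDurationMs to size how many values are pulled and
        // acknowledged at once, so that acknowledged records are not lost when stopping.
        return null; // null tells the runtime there is nothing to read right now
    }
}
```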
If your streaming connector (infinite = true) is defined with stoppable = true, a design-time UI is available in both Cloud and Studio for specifying the stop strategy.
By default, these values are set to -1 in the settings, which means an infinite behavior.
At runtime, you can set system properties to apply the strategy. You need to prefix properties with the component’s family.
If some changes impact the configuration, they can be managed through a migration handler at the component level (enabling trans-model migration support). The @Version annotation supports a migrationHandler attribute which migrates the incoming configuration to the current model. For example, if the filepath configuration entry from v1 changed to location in v2, you can remap the value in your MigrationHandler implementation (a rough sketch is shown below).

A best practice is to split migrations into services that you can inject in the migration handler (through the constructor) rather than managing all migrations directly in the handler. The important point is that you can organize your migrations in the way that best fits your component. If you need to apply migrations in a specific order, make sure that they are sorted. Consider this API as a migration callback rather than a migration API: adjust the migration code structure you need behind the MigrationHandler, based on your component requirements, using service injection.

A nested configuration always migrates itself without any root prefix, whereas a component configuration always sees the full configuration rooted at the component. For example, if your model contains a datastore nested under the component configuration, the component will see the path configuration.datastore.url for the datastore url whereas the datastore will see the path url for the same property. You can see it as configuration types - @DataStore, @DataSet - being configured with an empty root path.
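A rough sketch of the filepath to location remapping mentioned above; the option paths and class name are illustrative.

```java
import java.util.Map;

import org.talend.sdk.component.api.component.MigrationHandler;

public class MyComponentMigrationHandler implements MigrationHandler {

    @Override
    public Map<String, String> migrate(final int incomingVersion, final Map<String, String> incomingData) {
        if (incomingVersion < 2) {
            // move the value stored under the old path to the new path
            final String filepath = incomingData.remove("configuration.filepath");
            if (filepath != null) {
                incomingData.put("configuration.location", filepath);
            }
        }
        return incomingData;
    }
}
```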
The component API is declarative (through annotations) to ensure it is:

Evolutive. It can get new features without breaking old code.

As static as possible.

Because it is fully declarative, any new API can be added iteratively without requiring any change to existing components. For example, in the case of a potential Beam evolution, existing components would not be affected by the addition of the new Timer API.

The intent of the framework is to fit in a Java UI as well as in a web UI, understood as colocalized and remote UIs. The goal is to move the logic for UI-related actions to the UI side as much as possible. For example, validating a pattern, a size, and so on, should be done on the client side rather than on the server side. Being static encourages this practice.

The other goal of being static in the API definition is to ensure that the model is not mutated at runtime and that all the auditing and modeling can be done before, at the design phase. Being static also ensures that the development can be validated as much as possible through build tools. This does not replace the requirement to test the components but helps developers to maintain components with automated tools. Refer to this document.

The components must be able to execute even if they have conflicting libraries. For that purpose, classloaders must be isolated. A component defines its dependencies based on a Maven format and is always bound to its own classloader.

The definition payload is as flat as possible and strongly typed to ensure it can be manipulated by consumers. This way, consumers can add or remove fields with simple mapping rules, without any abstract tree handling. The execution (runtime) configuration is the concatenation of framework metadata (only the version) and a key/value model of the configuration instance based on the definition property paths for the keys. It enables consumers to maintain and work with the keys/values according to their needs.

The framework not being responsible for any persistence, it is very important to make sure that consumers can handle it from end to end, with the ability to search for values (update a machine, update a port and so on) and keys (for example, a new encryption rule on the key certificate). Talend Component Kit is a metamodel provider (to build forms) and a runtime execution platform. It takes a configuration instance and uses it volatilely to execute a component logic. This implies it cannot own the data nor define the contract it has for these two endpoints, and must let the consumers handle the data lifecycle (creation, encryption, deletion, and so on).

A new MIME type called talend/stream is introduced to define a streaming format. It matches one JSON object per line.

Icons (@Icon) are based on a fixed set. Custom icons can be used but their display cannot be guaranteed. Components can be used in any environment and require a consistent look that cannot be guaranteed outside of the UI itself. Defining keys only is the best way to communicate this information. Once you know exactly how you will deploy your component in the Studio, you can use @Icon(value = CUSTOM, custom = "…") to use a custom icon file.
Processors and output components are the components in charge of reading, processing and transforming data in a Talend job, as well as passing it to its required destination.
Before implementing the component logic and defining its layout and configurable fields, make sure you have specified its basic metadata, as detailed in this document.
A Processor is a component that converts incoming data to a different model.
A processor must have a method decorated with @ElementListener that takes the incoming data and returns the processed data:
Processors must be Serializable because they are distributed components.
If you just need to access data on a map-based ruleset, you can use Record or JsonObject as parameter type. From there, Talend Component Kit wraps the data to allow you to access it as a map. The parameter type is not enforced. This means that if you know you will get a SuperCustomDto, then you can use it as parameter type. But for generic components that are reusable in any chain, it is highly encouraged to use Record until you have an evaluation language-based processor that has its own way to access components.
For example:
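Here is a minimal sketch of such a method, using the generic Record model; the processor name and the mapping logic are illustrative.

```java
import java.io.Serializable;

import org.talend.sdk.component.api.processor.ElementListener;
import org.talend.sdk.component.api.processor.Processor;
import org.talend.sdk.component.api.record.Record;

@Processor(name = "MyProcessor")
public class MyProcessor implements Serializable {

    @ElementListener
    public Record map(final Record incoming) {
        // build and return the outgoing record from the incoming one
        return incoming;
    }
}
```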
A processor also supports @BeforeGroup and @AfterGroup methods, which must not take any parameter and must return void; any other result would be ignored. These methods are used by the runtime to mark a chunk of the data in a way that is estimated to suit the execution flow size.
Because the size is estimated, the size of a group can vary. It is even possible to have groups of size 1.
It is recommended to batch records, for performance reasons:
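The following is a sketch of this batching shape; the class name and the flush helper are illustrative, and the actual write to the backend is left as a comment.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.talend.sdk.component.api.processor.AfterGroup;
import org.talend.sdk.component.api.processor.BeforeGroup;
import org.talend.sdk.component.api.processor.ElementListener;
import org.talend.sdk.component.api.processor.Processor;
import org.talend.sdk.component.api.record.Record;

@Processor(name = "MyBufferedOutput")
public class MyBufferedOutput implements Serializable {

    private transient List<Record> buffer;

    @BeforeGroup
    public void begin() {
        buffer = new ArrayList<>(); // reset the buffer at the beginning of each group
    }

    @ElementListener
    public void onElement(final Record record) {
        buffer.add(record); // accumulate records instead of writing them one by one
    }

    @AfterGroup
    public void commit() {
        if (!buffer.isEmpty()) {
            flush(buffer); // write the whole buffer to the backend in a single call
        }
    }

    private void flush(final List<Record> records) {
        // backend-specific bulk write goes here
    }
}
```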
You can optimize the data batch processing by using the maxBatchSize parameter. This parameter is automatically implemented on the component when it is deployed to a Talend application; only the logic needs to be implemented. You can however customize its value by setting, in your LocalConfiguration, the property _maxBatchSize.value (for the family) or ${component simple class name}._maxBatchSize.value (for a particular component); otherwise its default is 1000. If you replace value with active, you can also configure whether this feature is enabled, which is useful when you do not want to use it at all. Learn how to implement chunking/bulking in this document.
In some cases, you may need to split the output of a processor into two or more connections. A common example is to have "main" and "reject" output connections where part of the incoming data is passed to a specific bucket to be processed later.
Talend Component Kit supports two types of output connections: Flow and Reject.
Flow is the main and standard output connection.
The Reject connection handles records rejected during the processing. A component can only have one reject connection, if any. Its name must be REJECT to be processed correctly in Talend applications.
You can also define the different output connections of your component in the Starter.
To define an output connection, you can use @Output as a replacement for the returned value in the @ElementListener:
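For instance, inside a processor class like the one above (this assumes the org.talend.sdk.component.api.processor.Output and OutputEmitter imports):

```java
@ElementListener
public void map(final Record incoming, @Output final OutputEmitter<Record> main) {
    main.emit(incoming); // emit to the default flow branch instead of returning a value
}
```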
Alternatively, you can pass a string that represents the new branch:
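For instance, to emit to both the default branch and a REJECT branch (isValid is a hypothetical validation helper):

```java
@ElementListener
public void map(final Record incoming,
                @Output final OutputEmitter<Record> main,
                @Output("REJECT") final OutputEmitter<Record> reject) {
    if (isValid(incoming)) {
        main.emit(incoming);
    } else {
        reject.emit(incoming);
    }
}
```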
Having multiple inputs is similar to having multiple outputs, except that an OutputEmitter wrapper is not needed:
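For instance, a processor joining the default input with a second branch (the lookup branch name and the merge logic are illustrative; this assumes the org.talend.sdk.component.api.processor.Input import):

```java
@ElementListener
public Record join(@Input final Record main, @Input("lookup") final Record lookup) {
    // combine the default input with the "lookup" branch here
    return main;
}
```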
@Input takes the input name as parameter. If no name is set, it defaults to the "main (default)" input branch. It is recommended to use the default branch when possible and to avoid naming branches according to the component semantic.
Batch processing refers to the way execution environments process batches of data handled by a component using a grouping mechanism.
By default, the execution environment of a component automatically decides how to process groups of records and estimates an optimal group size depending on the system capacity. With this default behavior, the group size is not always ideal, and adjusting it can sometimes help the system handle the load more effectively or match business requirements.
For example, real-time or near real-time processing needs often imply processing smaller batches of data, but more frequently. On the other hand, a one-time processing without business constraints is handled more effectively with a batch size based on the system capacity.
Final users of a component developed with the Talend Component Kit that integrates the batch processing logic described in this document can override this automatic size. To do that, a maxBatchSize option is available in the component settings and allows you to set the maximum size of each group of data to process.
A component processes batch data as follows:
Case 1 - No maxBatchSize is specified in the component configuration. The execution environment estimates a group size of 4. Records are processed by groups of 4.
Case 2 - The runtime estimates a group size of 4 but a maxBatchSize of 3 is specified in the component configuration. The system adapts the group size to 3. Records are processed by groups of 3.
Batch processing relies on the sequence of three methods - @BeforeGroup, @ElementListener and @AfterGroup - which you can customize to your needs as a component developer.
The group size automatic estimation logic is automatically implemented when a component is deployed to a Talend application.
Each group is processed as follows until there is no record left:
The @BeforeGroup method resets a record buffer at the beginning of each group.
The records of the group are assessed one by one and placed in the buffer as follows: the @ElementListener method tests whether the buffer size is greater than or equal to the defined maxBatchSize. If it is, the buffered records are processed. If not, the current record is buffered.
The previous step happens for all records of the group. Then the @AfterGroup method tests if the buffer is empty.
You can define the following logic in the processor configuration:
You can also use the condensed syntax for this kind of processor:
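A possible shape of that condensed form, assuming your framework version relaxes the no-parameter rule mentioned earlier and allows @AfterGroup to receive the whole group of records directly; names are illustrative.

```java
import java.io.Serializable;
import java.util.Collection;

import org.talend.sdk.component.api.processor.AfterGroup;
import org.talend.sdk.component.api.processor.Processor;
import org.talend.sdk.component.api.record.Record;

@Processor(name = "MyBulkOutput")
public class MyBulkOutput implements Serializable {

    @AfterGroup
    public void commit(final Collection<Record> records) {
        // write the whole group of records to the backend in one call
    }
}
```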
When writing tests for components, you can force the maxBatchSize parameter value by setting it with the following syntax:
In some cases, you may need to add actions that are not related to the runtime, for example to let users of the plugin/library test whether a connection works properly. To do so, define an @Action, which is a method with a name (representing the event name), in a class decorated with @Service.

Services are singletons. If you need thread safety, make sure that they match that requirement. Services should not store any state either, because they can be serialized at any time; state is held by the component. Services can also be used in components (matched by type). They allow you to reuse shared logic, like a client. Here is a sample with a service used to access files: the service is automatically passed to the constructor and can be used as a bean. In that case, you only need to call the service method.

Some common actions need a clear contract, so they are defined as first-class citizens of the API. This is the case for wizards or health checks, for example. Here is the list of the available actions:

Marks an action that closes a runtime connection, returning a close helper object that performs the actual close. This functionality is for the Studio only: the Studio uses the close object to close an existing connection; it has no effect on the cloud platform. Type: close_connection. API: @org.talend.sdk.component.api.service.connection.CloseConnection. Returned type: org.talend.sdk.component.api.service.connection.CloseConnectionObject. Sample:

Marks an action that creates a runtime connection, returning a runtime connection object such as a JDBC connection for a database family. Its parameter MUST be a datastore, that is, a configuration type annotated with @DataStore. This functionality is for the Studio only: the Studio uses the runtime connection object when an existing connection is reused; it has no effect on the cloud platform. Type: create_connection. API: @org.talend.sdk.component.api.service.connection.CreateConnection.

Marks an action that explores a connection to retrieve potential datasets. Type: discoverdataset. API: @org.talend.sdk.component.api.service.discovery.DiscoverDataset. Returned type: org.talend.sdk.component.api.service.discovery.DiscoverDatasetResult. Sample:

Marks a method as being useful to fill potential values of a string option for a property denoted by its value. You can link a field as being completable using @Proposable(value). The resolution of the completion action is then done through the component family and the value of the action. The callback does not take any parameter. Type: dynamic_values. API: @org.talend.sdk.component.api.service.completion.DynamicValues. Returned type: org.talend.sdk.component.api.service.completion.Values. Sample:

Marks an action doing a connection test. Type: healthcheck. API: @org.talend.sdk.component.api.service.healthcheck.HealthCheck. Returned type: org.talend.sdk.component.api.service.healthcheck.HealthCheckStatus. Sample:

Marks an action as returning a discovered schema. Its parameter MUST be a dataset, that is, a configuration type annotated with @DataSet. If the component has multiple datasets, then the dataset used as the action parameter should have the same identifier as the @DiscoverSchema.
Type: schema. API: @org.talend.sdk.component.api.service.schema.DiscoverSchema. Returned type: org.talend.sdk.component.api.record.Schema. Sample:

Marks a method as returning a Schema computed from a connector configuration and some other parameters. Parameters can be an incoming schema and/or an outgoing branch. The annotation's value should match the connector's name. Type: schema_extended. API: @org.talend.sdk.component.api.service.schema.DiscoverSchemaExtended. Returned type: org.talend.sdk.component.api.record.Schema. Sample:

Marks a method as being useful to fill potential values of a string option. You can link a field as being completable using @Suggestable(value). The resolution of the completion action is then done when the user requests it (generally by clicking a button or entering the field, depending on the environment). Type: suggestions. API: @org.talend.sdk.component.api.service.completion.Suggestions. Returned type: org.talend.sdk.component.api.service.completion.SuggestionValues. Sample:

Marks an action returning a new instance replacing part of a form/configuration. Type: update. API: @org.talend.sdk.component.api.service.update.Update.

Extension point for custom UI integrations and custom actions. Type: user. API: @org.talend.sdk.component.api.service.Action.

Marks a method as being used to validate a configuration. This is a server-side validation, so only use it if you cannot implement the validation on the client side. Type: validation. API: @org.talend.sdk.component.api.service.asyncvalidation.AsyncValidation. Returned type: org.talend.sdk.component.api.service.asyncvalidation.ValidationResult. Sample:

The following actions are provided - or not - by the application the UI runs within, so always make sure your component does not strictly require them. Mark the decorated field as supporting suggestions, that is, dynamically getting a list of valid values the user can use. It differs from @Suggestable in that the implementation is looked up in the current application rather than in the services. Note that it can do nothing in some environments and that there is no guarantee the specified action is supported. API: @org.talend.sdk.component.api.configuration.action.BuiltInSuggestable.

Internationalization is supported through the injection of the $lang parameter, which allows you to get the correct locale to use with an @Internationalized service. You can combine the $lang option with the @Internationalized and @Language parameters.
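To illustrate the service and health check concepts above, here is a minimal sketch; MyDataStore and the connect helper are hypothetical and stand for your datastore configuration and connection logic.

```java
import java.io.Serializable;

import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.service.Service;
import org.talend.sdk.component.api.service.healthcheck.HealthCheck;
import org.talend.sdk.component.api.service.healthcheck.HealthCheckStatus;

@Service
public class MyConnectionService implements Serializable {

    @HealthCheck("testConnection")
    public HealthCheckStatus testConnection(@Option final MyDataStore datastore) {
        try {
            // open and close a connection built from the datastore parameters
            connect(datastore).close();
            return new HealthCheckStatus(HealthCheckStatus.Status.OK, "Connection successful");
        } catch (final Exception e) {
            return new HealthCheckStatus(HealthCheckStatus.Status.KO, e.getMessage());
        }
    }

    private AutoCloseable connect(final MyDataStore datastore) {
        // hypothetical helper creating a client/connection from the datastore
        throw new UnsupportedOperationException("implement with your backend client");
    }
}
```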