Getting started with Talend Component Kit
Talend Component Kit is a Java framework designed to simplify the development of components at two levels:
- The Runtime, which injects the specific component code into a job or pipeline. The framework helps unify as much as possible the code required to run in Data Integration (DI) and Beam environments.
- The Graphical interface. The framework helps unify the code required to render the component in a browser or in the Eclipse-based Talend Studio (SWT).
Most of the development happens in a Maven or Gradle project and requires a dedicated tool such as IntelliJ.
The Component Kit is made of:
- A Starter, a graphical interface allowing you to define the skeleton of your development project.
- APIs to implement the component UI and runtime.
- Development tools: Maven and Gradle wrappers, validation rules, packaging, Web preview, and so on.
- A testing kit based on JUnit 4 and 5.
By using this tooling in a development environment, you can start creating components as described below.
Talend Component Kit methodology
Developing new components using the Component Kit framework includes:
- Creating a project using the starter or the Talend IntelliJ plugin. This step allows you to build the skeleton of the project. It consists of:
  - Defining the general configuration model for each component in your project.
  - Generating and downloading the project archive from the starter.
  - Compiling the project.
- Importing the compiled project in your IDE. This step is not required if you have generated the project using the IntelliJ plugin.
- Implementing the components, including:
  - Registering the components by specifying their metadata: family, categories, version, icon, type and name.
  - Defining the layout and configurable part of the components.
  - Defining the execution logic of the components, also called runtime.
- Deploying the components to Talend Studio or Cloud applications.
Optionally, you can use services. Services are predefined or user-defined configurations that can be reused in several components.
Component types
There are four types of components, each type coming with its specificities, especially on the runtime side.
- Input components: Retrieve the data to process from a defined source. An input component is made of:
  - The execution logic of the component, represented by a Mapper or an Emitter class.
  - The source logic of the component, represented by a Source class.
  - The layout of the component and the configuration that the end-user will need to provide when using the component, defined by a Configuration class. All input components must have a dataset specified in their configuration, and every dataset must use a datastore.
- Processors: Process and transform the data. A processor is made of:
  - The execution logic of the component, describing how to process each record or batch of records it receives. It also describes how to pass records to its output connections. This logic is defined in a Processor class.
  - The layout of the component and the configuration that the end-user will need to provide when using the component, defined by a Configuration class.
- Output components: Send the processed data to a defined destination. An output component is made of:
  - The execution logic of the component, describing how to process each record or batch of records it receives. This logic is defined in an Output class. Unlike processors, output components are the last components of the execution and return no data.
  - The layout of the component and the configuration that the end-user will need to provide when using the component, defined by a Configuration class. All output components must have a dataset specified in their configuration, and every dataset must use a datastore.
- Standalone components: Make a call to a service or run a query on a database. A standalone component is made of:
  - The execution logic of the component, represented by a DriverRunner class.
  - The layout of the component and the configuration that the end-user will need to provide when using the component, defined by a Configuration class. All standalone components must have a datastore or dataset specified in their configuration, and every dataset must use a datastore.
For illustration, the example below shows the different classes of an input component in a multi-component development project.
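This is a minimal, hypothetical sketch: the MySQL* names are examples and not code generated by the Starter.

import java.io.Serializable;
import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.input.Emitter;
import org.talend.sdk.component.api.input.Producer;
import org.talend.sdk.component.api.record.Record;

// Configurable part: references the dataset (which itself references a datastore).
public class MySQLInputConfiguration implements Serializable {
    @Option
    private MySQLDataset dataset; // hypothetical dataset class
}

// Execution and source logic: an Emitter whose @Producer method reads the data.
@Emitter(name = "MySQLInput")
public class MySQLInputEmitter implements Serializable {

    private final MySQLInputConfiguration configuration;

    public MySQLInputEmitter(@Option("configuration") final MySQLInputConfiguration configuration) {
        this.configuration = configuration;
    }

    @Producer
    public Record next() {
        // return the next record, or null when there is no data left
        return null;
    }
}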
Creating your first component
This tutorial walks you through the most common iteration steps to create a component with Talend Component Kit and to deploy it to Talend Open Studio.
The component created in this tutorial is a simple processor that reads data coming from the previous component in a job or pipeline and displays it in the console logs of the application, along with additional information entered by the final user.
The component designed in this tutorial is a processor and does not require nor show any datastore or dataset configuration. Datasets and datastores are required only for input and output components.
Prerequisites
To get your development environment ready and be able to follow this tutorial:
- Download and install a Java JDK 1.8 or greater.
- Download and install Talend Open Studio. For example, from Sourceforge.
- Download and install IntelliJ.
- Download the Talend Component Kit plugin for IntelliJ. The detailed installation steps for the plugin are available in this document.
Generate a component project
The first step in this tutorial is to generate a component skeleton using the Starter embedded in the Talend Component Kit plugin for IntelliJ.
- Start IntelliJ and create a new project. In the available options, you should see Talend Component.
- Make sure that a Project SDK is selected. Then, select Talend Component and click Next. The Talend Component Kit Starter opens.
- Enter the component and project metadata and change the default values, for example:
  - The Component Family and the Category will be used later in Talend Open Studio to find the new component.
  - Project metadata is mostly used to identify the project structure. A common practice is to replace 'company' in the default value by a value of your own, like your domain name.
- Once the metadata is filled, select Add a component. A new screen is displayed in the Talend Component Kit Starter that lets you define the generic configuration of the component. By default, new components are processors.
- Enter a valid Java name for the component. For example, Logger.
- Select Configuration Model and add a string type field named level. This input field will be used in the component configuration for final users to enter additional information to display in the logs.
- In the Input(s) / Output(s) section, click the default MAIN input branch to access its detail, and make sure that the record model is set to Generic. Leave the Name of the branch with its default MAIN value.
- Repeat the same step for the default MAIN output branch. Because the component is a processor, it has an output branch by default. A processor without any output branch is considered an output component. You can create output components when the Activate IO option is selected.
- Click Next and check the name and location of the project, then click Finish to generate the project in the IDE.
At this point, your component is technically already ready to be compiled and deployed to Talend Open Studio. But first, take a look at the generated project:
- Two classes based on the name and type of component defined in the Talend Component Kit Starter have been generated:
  - LoggerProcessor is where the component logic is defined.
  - LoggerProcessorConfiguration is where the component layout and configurable fields are defined, including the level string field that was defined earlier in the configuration model of the component.
- The package-info.java file contains the component metadata defined in the Talend Component Kit Starter, such as family and category.
- You can notice as well that the elements in the tree structure are named after the project metadata defined in the Talend Component Kit Starter.
These files are the starting point if you later need to edit the configuration, logic, and metadata of the component.
There is more that you can do and configure with the Talend Component Kit Starter. This tutorial covers only the basics. You can find more information in this document.
Compile and deploy the component to Talend Open Studio
Without modifying the component code generated from the Starter, you can compile the project and deploy the component to a local instance of Talend Open Studio.
The logic of the component is not yet implemented at that stage. Only the configurable part specified in the Starter will be visible. This step is useful to confirm that the basic configuration of the component renders correctly.
Before starting to run any command, make sure that Talend Open Studio is not running.
- From the component project in IntelliJ, open a Terminal and make sure that the selected directory is the root of the project. All commands shown in this tutorial are performed from this location.
- Compile the project by running the following command: mvnw clean install. The mvnw command refers to the Maven wrapper that is embedded in Talend Component Kit. It allows you to use the right version of Maven for your project without having to install it manually beforehand. An equivalent wrapper is available for Gradle.
- Once the command is executed and you see BUILD SUCCESS in the terminal, deploy the component to your local instance of Talend Open Studio using the following command: mvnw talend-component:deploy-in-studio -Dtalend.component.studioHome="<path to Talend Open Studio home>". Replace the path with your own value. If the path contains spaces (for example, Program Files), enclose it with double quotes.
- Make sure the build is successful.
- Open Talend Open Studio and create a new Job.
At this point, the new component is available in Talend Open Studio, and its configurable part is already set. But the component logic is still to be defined.
Edit the component
You can now edit the component to implement its logic: reading the data coming through the input branch to display that data in the execution logs of the job. The value of the level field that final users can fill also needs to be changed to uppercase and displayed in the logs.
- Save the job created earlier and close Talend Open Studio.
- Go back to the component development project in IntelliJ and open the LoggerProcessor class. This is the class where the component logic can be defined.
- Look for the @ElementListener method. It is already present and references the default input branch that was defined in the Talend Component Kit Starter, but it is not complete yet.
- To be able to log the data in input to the console, add the following lines:

//Log read input to the console with uppercase level.
System.out.println("[" + configuration.getLevel().toUpperCase() + "] " + defaultInput);

The @ElementListener method now looks as follows:

@ElementListener
public void onNext(@Input final Record defaultInput) {
    //Reads the input.
    //Log read input to the console with uppercase level.
    System.out.println("[" + configuration.getLevel().toUpperCase() + "] " + defaultInput);
}
- Open a Terminal again to compile the project and deploy the component again. To do that, run successively the two following commands:
  - mvnw clean install
  - mvnw talend-component:deploy-in-studio -Dtalend.component.studioHome="<path to Talend Open Studio home>"
- The update of the component logic should now be deployed. After restarting Talend Open Studio, you will be ready to build a job and use the component for the first time.
To learn the different possibilities and methods available to develop more complex logics, refer to this document.
If you want to avoid having to close and re-open Talend Open Studio every time you need to make an edit, you can enable the developer mode, as explained in this document.
Build a job with the component
As the component is now ready to be used, it is time to create a job and check that it behaves as intended.
- Open Talend Open Studio again and go to the job created earlier. The new component is still there.
- Add a tRowGenerator component and connect it to the logger.
- Double-click the tRowGenerator to specify the data to generate.
- Validate the tRowGenerator configuration.
- Open the TutorialFamilyLogger component and set the level field to info.
- Go to the Run tab of the job and run the job. The job is executed. You can observe in the console that each of the 10 generated rows is logged, and that the info value entered in the logger is also displayed with each record, in uppercase.
Record types
Components are designed to manipulate data (access, read, create). Talend Component Kit can handle several types of data, described in this document.
By design, the framework must run in DI (plain standalone Java program) and in Beam pipelines.
It is out of the scope of the framework to handle the way the runtime serializes the data, if needed.
For that reason, it is critical not to import serialization constraints to the stack. As an example, this is one of the reasons why Record or JsonObject were preferred to Avro IndexedRecord.
Any serialization concern should either be hidden in the framework runtime (outside of the component developer scope) or in the runtime integration with the framework (for example, Beam integration).
Record
Record is the default format. It offers many possibilities and can evolve depending on the Talend platform needs. Its structure is data-driven and exposes a schema that allows you to browse it.
Projects generated from the Talend Component Kit Starter are by default designed to handle this format of data.
Record is a Java interface, but never implement it yourself: to ensure compatibility with the different Talend products, follow the guidelines below.
Creating a record
You can build records using the newRecordBuilder method of the RecordBuilderFactory (see here).
For example:
public Record createRecord() {
return factory.newRecordBuilder()
.withString("name", "Gary")
.withDateTime("date", ZonedDateTime.of(LocalDateTime.of(2011, 2, 6, 8, 0), ZoneId.of("UTC")))
.build();
}
In the example above, the schema is dynamically computed from the data. You can also do it using a pre-built schema, as follows:
public Record createRecord() {
return factory.newRecordBuilder(myAlreadyBuiltSchemaWithSchemaBuilder)
.withString("name", "Gary")
.withDateTime("date", ZonedDateTime.of(LocalDateTime.of(2011, 2, 6, 8, 0), ZoneId.of("UTC")))
.build();
}
The example above uses a schema that was pre-built with factory.newSchemaBuilder(Schema.Type.RECORD).
When using a pre-built schema, the entries passed to the record builder are validated. It means that if you pass a null value or an entry type that does not match the provided schema, the record creation fails. It also fails if you try to add an entry that does not exist or if you did not set a non-nullable entry.
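For reference, here is a minimal sketch of how such a schema can be pre-built with the same RecordBuilderFactory; the field names simply mirror the record example above.

// Sketch: pre-building the schema used by createRecord(),
// assuming an injected RecordBuilderFactory named factory.
private Schema buildSchema() {
    return factory.newSchemaBuilder(Schema.Type.RECORD)
            .withEntry(factory.newEntryBuilder()
                    .withName("name")
                    .withType(Schema.Type.STRING)
                    .build())
            .withEntry(factory.newEntryBuilder()
                    .withName("date")
                    .withType(Schema.Type.DATETIME)
                    .build())
            .build();
}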
Using a dynamic schema can be useful on the backend but can lead to more issues for users creating a pipeline to process the data. Using a pre-built schema is more reliable for end-users.
Accessing and reading a record
You can access and read data by relying on the getSchema method, which provides you with the available entries (columns) of a record. The Entry exposes the type of its value, which lets you access the value through the corresponding method. For example, the Schema.Type.STRING type implies using the getString method of the record.
For example:
public void print(final Record record) {
final Schema schema = record.getSchema();
// log in the natural type
schema.getEntries()
.forEach(entry -> System.out.println(record.get(Object.class, entry.getName())));
// log only strings
schema.getEntries().stream()
.filter(e -> e.getType() == Schema.Type.STRING)
.forEach(entry -> System.out.println(record.getString(entry.getName())));
}
Supported data types
The Record format supports the following data types:
- String
- Boolean
- Int
- Long
- Float
- Double
- DateTime
- Array
- Bytes
- Record

A map can always be modeled as a list (array of records with key and value entries).
For example:
public Record create() {
final Record address = factory.newRecordBuilder()
.withString("street", "Prairie aux Ducs")
.withString("city", "Nantes")
.withString("country", "FRANCE")
.build();
return factory.newRecordBuilder()
.withBoolean("active", true)
.withInt("age", 33)
.withLong("duration", 123459)
.withFloat("tolerance", 1.1f)
.withDouble("balance", 12.58)
.withString("name", "John Doe")
.withDateTime("birth", ZonedDateTime.now())
.withRecord(
factory.newEntryBuilder()
.withName("address")
.withType(Schema.Type.RECORD)
.withComment("The user address")
.withElementSchema(address.getSchema())
.build(),
address)
.withArray(
factory.newEntryBuilder()
.withName("permissions")
.withType(Schema.Type.ARRAY)
.withElementSchema(factory.newSchemaBuilder(Schema.Type.STRING).build())
.build(),
asList("admin", "dev"))
.build();
}
Example: discovering a schema
For example, you can use the API to provide the schema. The following method needs to be implemented in a service.
Manually constructing the schema without any data:
@DiscoverSchema
public Schema getSchema(@Option MyDataset dataset) {
return factory.newSchemaBuilder(Schema.Type.RECORD)
.withEntry(factory.newEntryBuilder().withName("id").withType(Schema.Type.LONG).build())
.withEntry(factory.newEntryBuilder().withName("name").withType(Schema.Type.STRING).build())
.build();
}
Returning the schema from an already built record:
@DiscoverSchema
public Schema guessSchema(@Option MyDataset dataset, final MyDataLoaderService myCustomService) {
return myCustomService.loadFirstData().getRecord().getSchema();
}
MyDataset is the class that defines the dataset. Learn more about datasets and datastores in this document.
Authorized characters in entry names
Entry names for Record and JsonObject types must comply with the following rules:
- The name must start with a letter or with _. If not, the invalid characters are ignored until the first valid character.
- The following characters of the name must be a letter, a number, or _. If not, the invalid character is replaced with _.
For example:
- 1foo becomes foo.
- f@o becomes f_o.
- 1234f5@o becomes ___f5_o.
- foo123 stays foo123.
JsonObject
The runtime also supports JsonObject as input and output component type. You can rely on the JSON services (Jsonb, JsonBuilderFactory) to create new instances.
This format is close to the Record format, except that it does not natively support the Datetime type and has a unique Number type to represent Int, Long, Float and Double types. It also does not provide entry metadata like nullable or comment, for example.
It also inherits the Record format limitations.
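As an illustration, here is a minimal sketch of building a JsonObject through an injected JsonBuilderFactory. The builder calls below are the standard JSON-P API (javax.json), not a Talend-specific API:

import javax.json.JsonBuilderFactory;
import javax.json.JsonObject;

public JsonObject createUser(final JsonBuilderFactory factory) {
    // Note the single Number type (33 is just a JSON number) and the absence of a Datetime type.
    return factory.createObjectBuilder()
            .add("name", "Gary")
            .add("age", 33)
            .build();
}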
Setting up your environment
Before being able to develop components using Talend Component Kit, you need the right system configuration and tools.
Although Talend Component Kit comes with some embedded tools, such as Maven and Gradle wrappers, you still need to prepare your system. A Talend Component Kit plugin for IntelliJ is also available and allows you to design and generate your component project right from IntelliJ.
System prerequisites
In order to use Talend Component Kit, you need the following tools installed on your machine:
- Java JDK 1.8.x. You can download it from the Oracle website.
- Talend Open Studio to integrate your components.
- A Java Integrated Development Environment such as Eclipse or IntelliJ. IntelliJ is recommended as a Talend Component Kit plugin is available.
- Optional: If you use IntelliJ, you can install the Talend Component Kit plugin for IntelliJ.
- Optional: A build tool:
  - Apache Maven 3.5.4 is recommended to develop a component or the project itself. You can download it from the Apache Maven website.
  - You can also use Gradle, but at the moment certain features are not supported, such as validations.
  Installing a build tool separately is optional, since Maven and Gradle wrappers are already available with Talend Component Kit.
Installing the Talend Component Kit IntelliJ plugin
The Talend Component Kit IntelliJ plugin is a plugin for the IntelliJ Java IDE. It adds support for the Talend Component Kit project creation.
Main features:
- Project generation support.
- Internationalization completion for component configuration.
Installing the IntelliJ plugin
In IntelliJ IDEA:
- Go to File > Settings…
- On the left panel, select Plugins.
- Access the Marketplace tab.
- Enter Talend in the search field and select Talend Component Kit.
- Select Install.
- Click the Restart IDE button.
- Confirm the IDEA restart to complete the installation.
The plugin is now installed on your IntelliJ IDEA. You can start using it.
About the internationalization completion
The plugin offers auto-completion for the configuration internationalization. The Talend component configuration lets you set up translatable and user-friendly labels for your configuration using a property file. Auto-completion is possible for the configuration keys and default values in the property file.
For example, you can internationalize a simple configuration class for a basic authentication that you use in your component:
@Checkable("basicAuth")
@DataStore("basicAuth")
@GridLayout({
@GridLayout.Row({ "url" }),
@GridLayout.Row({ "username", "password" }),
})
public class BasicAuthConfig implements Serializable {
@Option
private String url;
@Option
private String username;
@Option
@Credential
private String password;
}
This configuration class contains three properties to which you can attach a user-friendly label.
For example, you can define a label like My server URL for the url option:
- Locate or create a Messages.properties file in the project resources and add the label to that file. The plugin automatically detects your configuration and provides you with key completion in the property file, as sketched below.
- Press Ctrl+Space to see the key suggestions.
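As an illustration, the resulting property file could contain entries like the following sketch. The exact key prefix depends on your family and package, so rely on the plugin's completion rather than on these hypothetical keys:

# Hypothetical keys for the BasicAuthConfig options; let the plugin suggest the real ones.
BasicAuthConfig.url._displayName = My server URL
BasicAuthConfig.username._displayName = Username
BasicAuthConfig.password._displayName = Password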
Generating a project
The first step when developing new components is to create a project that will contain the skeleton of your components and set you on the right track.
The project generation can be achieved using the Talend Component Kit Starter or the Talend Component Kit plugin for IntelliJ.
Through a user-friendly interface, you can define the main lines of your project and of your component(s), including their name, family, type, configuration model, and so on.
Once completed, all the information filled in is used to generate a project that you will use as a starting point to implement the logic and layout of your components, and to iterate on them.
Once your project is generated, you can start implementing the component logic.
Generating a project using the Component Kit Starter
The Component Kit Starter lets you design your components configuration and generates a ready-to-implement project structure.
The Starter is available on the web or as an IntelliJ plugin.
This tutorial shows you how to use the Component Kit Starter to generate new components for MySQL databases. Before starting, make sure that you have correctly set up your environment. See this section.
When defining a project using the Starter, do not refresh the page to avoid losing your configuration.
Configuring the project
Before being able to create components, you need to define the general settings of the project:
- Create a folder on your local machine to store the resource files of the component you want to create. For example, C:/my_components.
- Open the Starter in the web browser of your choice.
- Select your build tool. This tutorial uses Maven, but you can select Gradle instead.
- Add any facet you need. For example, add the Talend Component Kit Testing facet to your project to automatically generate unit tests for the components created in the project.
- Enter the Component Family of the components you want to develop in the project. This name must be a valid Java name and is recommended to be capitalized, for example 'MySQL'. Once you have implemented your components in the Studio, this name is displayed in the Palette to group all of the MySQL-related components you develop, and is also part of your component name.
- Select the Category of the components you want to create in the current project. As MySQL is a kind of database, select Databases in this tutorial. This Databases category is used and displayed as the parent family of the MySQL group in the Palette of the Studio.
- Complete the project metadata by entering the Group, Artifact and Package.
- By default, you can only create processors. If you need to create Input or Output components, select Activate IO. By doing this:
  - Two new menu entries let you add datasets and datastores to your project, as they are required for input and output components. Input and Output components without a dataset (itself containing a datastore) will not pass the validation step when building the components. Learn more about datasets and datastores in this document.
  - An Input component and an Output component are automatically added to your project and ready to be configured.
  - Components added to the project using Add A Component can now be processors, input or output components.
Defining a Datastore
A datastore represents the data needed by an input or output component to connect to a database.
When building a component, the validateDataSet validation checks that each input or output (processor without output branch) component uses a dataset and that this dataset has a datastore.
You can define one or several datastores if you have selected the Activate IO step.
- Select Datastore. The list of datastores opens. By default, a datastore is already open but not configured. You can configure it or create a new one using Add new Datastore.
- Specify the name of the datastore. Modify the default value to a meaningful name for your project. This name must be a valid Java name as it will represent the datastore class in your project. It is a good practice to start it with an uppercase letter.
- Edit the datastore configuration. Parameter names must be valid Java names. Use lower case as much as possible. A typical configuration includes connection details to a database:
  - url
  - username
  - password
- Save the datastore configuration.
Defining a Dataset
A dataset represents the data coming from or sent to a database and needed by input and output components to operate.
The validateDataSet validation checks that each input or output (processor without output branch) component uses a dataset and that this dataset has a datastore.
You can define one or several datasets if you have selected the Activate IO step.
- Select Dataset. The list of datasets opens. By default, a dataset is already open but not configured. You can configure it or create a new one using the Add new Dataset button.
- Specify the name of the dataset. Modify the default value to a meaningful name for your project. This name must be a valid Java name as it will represent the dataset class in your project. It is a good practice to start it with an uppercase letter.
- Edit the dataset configuration. Parameter names must be valid Java names. Use lower case as much as possible. A typical configuration includes details of the data to retrieve:
  - the datastore to use (which contains the connection details to the database)
  - the table name
  - the data
- Save the dataset configuration.
Creating an Input component
To create an input component, make sure you have selected Activate IO.
When clicking Add A Component in the Starter, a new step allows you to define a new component in your project.
The intent in this tutorial is to create an input component that connects to a MySQL database, executes a SQL query and gets the result.
- Choose the component type. Input in this case.
- Enter the component name. For example, MySQLInput.
- Click Configuration Model. This button lets you specify the required configuration for the component. By default, a dataset is already specified.
- For each parameter that you need to add, click the (+) button on the right panel. Enter the parameter name and choose its type, then click the tick button to save the changes. In this tutorial, to be able to execute a SQL query on the Input MySQL database, the configuration requires the following parameters:
- Specify whether the component issues a stream or not. In this tutorial, the MySQL input component created is an ordinary (non-streaming) component. In this case, leave the Stream option disabled.
- Select the Record Type generated by the component. In this tutorial, select Generic because the component is designed to generate records in the default Record format. You can also select Custom to define a POJO that represents your records.
Your input component is now defined. You can add another component or generate and download your project.
Creating a Processor component
When clicking Add A Component in the Starter, a new step allows you to define a new component in your project. The intent in this tutorial is to create a simple processor component that receives a record, logs it and returns it as it is.
If you did not select Activate IO, all new components you add to the project are processors by default. If you selected Activate IO, you can choose the component type. In this case, to create a Processor component, you have to manually add at least one output.
- If required, choose the component type: Processor in this case.
- Enter the component name. For example, RecordLogger, as the processor created in this tutorial logs the records.
- Specify the Configuration Model of the component. In this tutorial, the component doesn't need any specific configuration. Skip this step.
- Define the Input(s) of the component. For each input that you need to define, click Add Input. In this tutorial, only one input is needed to receive the record to log.
- Click the input name to access its configuration. You can change the name of the input and define its structure using a POJO. If you added several inputs, repeat this step for each one of them. The input in this tutorial is a generic record. Enable the Generic option and click Save.
- Define the Output(s) of the component. For each output that you need to define, click Add Output. The first output must be named MAIN. In this tutorial, only one generic output is needed to return the received record. Outputs can be configured the same way as inputs (see previous steps). You can define a reject output connection by naming it REJECT. This naming is used by Talend applications to automatically set the connection type to Reject.
Your processor component is now defined. You can add another component or generate and download your project.
Creating an Output component
To create an output component, make sure you have selected Activate IO.
When clicking Add A Component in the Starter, a new step allows you to define a new component in your project.
The intent in this tutorial is to create an output component that receives a record and inserts it into a MySQL database table.
Output components are processors without any output. In other words, an output is a processor that does not produce any records.
- Choose the component type. Output in this case.
- Enter the component name. For example, MySQLOutput.
- Click Configuration Model. This button lets you specify the required configuration for the component. By default, a dataset is already specified.
- For each parameter that you need to add, click the (+) button on the right panel. Enter the name and choose the type of the parameter, then click the tick button to save the changes. In this tutorial, to be able to insert a record in the output MySQL database, the configuration requires the following parameters:
  - a dataset (which contains the datastore with the connection information)
  - a timeout parameter.
  Closing the configuration panel on the right does not delete your configuration. However, refreshing the page resets the configuration.
- Define the Input(s) of the component. For each input that you need to define, click Add Input. In this tutorial, only one input is needed.
- Click the input name to access its configuration. You can change the name of the input and define its structure using a POJO. If you added several inputs, repeat this step for each one of them. The input in this tutorial is a generic record. Enable the Generic option and click Save.

Do not create any output because the component does not produce any record. This is the only difference between an output and a processor component.
Your output component is now defined. You can add another component or generate and download your project.
Generating and downloading the final project
Once your project is configured and all the components you need are created, you can generate and download the final project. In this tutorial, the project was configured and three components of different types (input, processor and output) have been defined.
- Click Finish on the left panel. You are redirected to a page that summarizes the project. On the left panel, you can also see all the components that you added to the project.
- Generate the project using one of the two options available:
  - Download it locally as a ZIP file using the Download as ZIP button.
  - Create a GitHub repository and push the project to it using the Create on Github button.
In this tutorial, the project is downloaded to the local machine as a ZIP file.
Compiling and exploring the generated project files
Once the package is available on your machine, you can compile it using the build tool selected when configuring the project.
- In the tutorial, Maven is the build tool selected for the project. In the project directory, execute the mvn package command. If you don't have Maven installed on your machine, you can use the Maven wrapper provided in the generated project, by executing the ./mvnw package command.
- If you have created a Gradle project, you can compile it using the gradle build command or using the Gradle wrapper: ./gradlew build.
The generated project code contains documentation that can guide and help you implement the component logic. Import the project to your favorite IDE to start the implementation.
Generating a project using an OpenAPI JSON descriptor
The Component Kit Starter allows you to generate a component development project from an OpenAPI JSON descriptor.
- Open the Starter in the web browser of your choice.
- Enable the OpenAPI mode using the toggle in the header.
- Go to the API menu.
- Paste the OpenAPI JSON descriptor in the right part of the screen. All the described endpoints are detected.
- Unselect the endpoints that you do not want to use in the future components. By default, all detected endpoints are selected.
- Go to the Finish menu.
- Download the project.
When exploring the project generated from an OpenAPI descriptor, you can notice the following elements:
- sources
- the API dataset
- an HTTP client for the API
- a connection folder containing the component configuration. By default, the configuration is only made of a simple datastore with a baseUrl parameter.
Generating a project using IntelliJ plugin
Once the plugin is installed, you can generate a component project.
- Select File > New > Project.
- In the New Project wizard, choose Talend Component and click Next. The plugin loads the component starter and lets you design your components. For more information about the Talend Component Kit starter, check this tutorial.
- Once your project is configured, select Next, then click Finish.
The project is automatically imported into the IDEA using the build tool that you have chosen.
Implementing components
Once you have generated a project, you can start implementing the logic and layout of your components and iterate on it. Depending on the type of component you want to create, the logic implementation can differ. However, the layout and component metadata are defined the same way for all types of components in your project.
In some cases, you will require specific implementations to handle more advanced cases; these are described later in this document.
You can also make certain configurations reusable across your project by defining services. Using your Java IDE along with a build tool supported by the framework, you can then compile your components to test and deploy them to Talend Studio or other Talend applications.
In any case, follow these best practices to ensure the components you develop are optimized.
You can also learn more about component loading and plugins in the related documentation.
Registering components
Before implementing a component logic and configuration, you need to specify the family and the category it belongs to, the component type and name, as well as a few other generic parameters. This set of metadata, and more particularly the family, categories and component type, is mandatory to recognize and load the component in Talend Studio or Cloud applications.
Some of these parameters are handled at the project generation using the starter, but can still be accessed and updated later on.
Component family and categories
The family and category of a component are automatically written in the package-info.java file of the component package, using the @Components annotation. By default, these parameters are already configured in this file when you import your project in your IDE. Their values correspond to what was defined during the project definition with the starter.
Multiple components can share the same family and category value, but the family + name pair must be unique for the system.
A component can belong to one family only and to one or several categories. If not specified, the category defaults to Misc.
The package-info.java file also defines the component family icon, which is different from the component icon. You can learn how to customize this icon in this section.
Here is a sample package-info.java:
@Components(name = "my_component_family", categories = "My Category")
package org.talend.sdk.component.sample;
import org.talend.sdk.component.api.component.Components;
Another example with an existing component:
@Components(name = "Salesforce", categories = {"Business", "Cloud"})
package org.talend.sdk.component.sample;
import org.talend.sdk.component.api.component.Components;
Component icon and version
Components can require metadata to be integrated in Talend Studio or Cloud platforms.
Metadata is set on the component class and belongs to the org.talend.sdk.component.api.component package.
When you generate your project and import it in your IDE, icon and version both come with a default value.
- @Icon: Sets an icon key used to represent the component. You can use a custom key with the custom() method, but the icon may not be rendered properly. The icon defaults to Check. Replace it with a custom icon, as described in this section.
- @Version: Sets the component version. 1 by default. Learn how to manage different versions and migrations between your component versions in this section.
For example:
@Version(1)
@Icon(FILE_XML_O)
@PartitionMapper(name = "jaxbInput")
public class JaxbPartitionMapper implements Serializable {
// ...
}
Defining a custom icon for a component or component family
Every component family and component needs to have a representative icon.
You have to define a custom icon as follows:
- For the component family, the icon is defined in the package-info.java file.
- For the component itself, you need to declare the icon in the component class.
Custom icons must comply with the following requirements:
- Icons must be stored in the src/main/resources/icons folder of the project.
- Icon file names need to match one of the following patterns: IconName.svg or IconName_icon32.png. The latter will run in degraded mode in Talend Cloud. Replace IconName by the name of your choice.
- Icons must be squared, even for the SVG format.
@Icon(value = Icon.IconType.CUSTOM, custom = "IconName")
Note that SVG icons are not supported by Talend Studio and can cause the deployment of the component to fail. If you aim at deploying a custom component to Talend Studio, specify PNG icons or use the Maven (or Gradle) svg2png goal to convert SVG icons to PNG. Ultimately, you can also remove the SVG icons from the component.
Component extra metadata
For any purpose, you can also add user-defined metadata to your component with the @Metadatas annotation.
Example:
@Processor(name = "my_processor")
@Metadatas({
@Metadata(key = "user::value0", value = "myValue0"),
@Metadata(key = "user::value1", value = "myValue1")
})
public class MyProcessor {
}
You can also use an SPI implementing org.talend.sdk.component.spi.component.ComponentMetadataEnricher.
Defining datasets and datastores
Datasets and datastores are configuration types that define how and where to pull the data from. They are used at design time to create shared configurations that can be stored and used at runtime.
All connectors (input and output components) created using Talend Component Kit must reference a valid dataset. Each dataset must reference a datastore.
- Datastore: The data you need to connect to the backend.
- Dataset: A datastore coupled with the data you need to execute an action.
Make sure that these rules are respected; they are enforced by the validateDataSet validation.
Defining a datastore
A datastore defines the information required to connect to a data source. For example, it can be made of:
- a URL
- a username
- a password.
You can specify a datastore and its context of use (in which dataset, etc.) from the Component Kit Starter.
Make sure to model the data your components are designed to handle before defining datasets and datastores in the Component Kit Starter.
Once you generate and import the project into an IDE, you can find datastores under a specific datastore node.
Example of datastore:
package com.mycomponent.components.datastore;
@DataStore("DatastoreA") (1)
@GridLayout({ (2)
// The generated component layout will display one configuration entry per line.
// Customize it as much as needed.
@GridLayout.Row({ "apiurl" }),
@GridLayout.Row({ "username" }),
@GridLayout.Row({ "password" })
})
@Documentation("A Datastore made of an API URL, a username, and a password. The password is marked as Credential.") (3)
public class DatastoreA implements Serializable {
@Option
@Documentation("")
private String apiurl;
@Option
@Documentation("")
private String username;
@Option
@Credential
@Documentation("")
private String password;
public String getApiurl() {
return apiurl;
}
public DatastoreA setApiurl(String apiurl) {
this.apiurl = apiurl;
return this;
}
public String getUsername() {
return username;
}
public DatastoreA setUsername(String username) {
this.username = username;
return this;
}
public String getPassword() {
return password;
}
public DatastoreA setPassword(String password) {
this.password = password;
return this;
}
}
(1) Identifies the class as a datastore and names it.
(2) Defines the layout of the datastore configuration.
(3) Defines each element of the configuration: a URL, a username, and a password. Note that the password is also marked as a credential.
Defining a dataset
A dataset represents the inbound data. It is generally made of:
- A datastore that defines the connection information needed to access the data.
- A query.
You can specify a dataset and its context of use (in which input and output component it is used) from the Component Kit Starter.
Make sure to model the data your components are designed to handle before defining datasets and datastores in the Component Kit Starter.
Once you generate and import the project into an IDE, you can find datasets under a specific dataset node.
Example of dataset referencing the datastore shown above:
package com.datastorevalidation.components.dataset;
@DataSet("DatasetA") (1)
@GridLayout({
// The generated component layout will display one configuration entry per line.
// Customize it as much as needed.
@GridLayout.Row({ "datastore" })
})
@Documentation("A Dataset configuration containing a simple datastore") (2)
public class DatasetA implements Serializable {
@Option
@Documentation("Datastore")
private DatastoreA datastore;
public DatastoreA getDatastore() {
return datastore;
}
public DatasetA setDatastore(DatastoreA datastore) {
this.datastore = datastore;
return this;
}
}
(1) Identifies the class as a dataset and names it.
(2) Implements the dataset and references DatastoreA as the datastore to use.
Internationalizing datasets and datastores
The display name of each dataset and datastore must be referenced in the Messages.properties file of the family package.
The key for dataset and datastore display names follows a defined pattern: ${family}.${configurationType}.${name}._displayName. For example:
ComponentFamily.dataset.DatasetA._displayName=Dataset A
ComponentFamily.datastore.DatastoreA._displayName=Datastore A
These keys are automatically added for datasets and datastores defined from the Component Kit Starter.
Reusing datasets and datastores in Talend Studio
When deploying a component or set of components that include datasets and datastores to Talend Studio, a new node is created under Metadata. This node has the name of the component family that was deployed.
It allows users to create reusable configurations for datastores and datasets.
With predefined datasets and datastores, users can then quickly fill the component configuration in their jobs. They can do so by selecting Repository as Property Type and by browsing to the predefined dataset or datastore.
How to create a reusable connection in Studio
Talend Studio can automatically generate connection and close components so that input and output components can reuse a connection. To benefit from this mechanism, declare the corresponding service methods, as in the following example:
@Service
public class SomeService {

    @CreateConnection
    public Object createConn(@Option("configuration") SomeDataStore dataStore) throws ComponentException {
        Object connection = null;
        // create the connection object from the datastore configuration
        return connection;
    }

    @CloseConnection
    public CloseConnectionObject closeConn() {
        return new CloseConnectionObject() {

            public boolean close() throws ComponentException {
                Object connection = this.getConnection();
                // do the close action on the connection
                return true;
            }
        };
    }
}
Then the runtime mapper and processor only need to use @Connection to get the connection:
@Version(1)
@Icon(value = Icon.IconType.CUSTOM, custom = "SomeInput")
@PartitionMapper(name = "SomeInput")
@Documentation("the doc")
public class SomeInputMapper implements Serializable {
@Connection
SomeConnection conn;
}
How does the component server interact with datasets and datastores
The component server scans all configuration types and returns a configuration type index. This index can be used for the integration into the targeted platforms (Studio, web applications, and so on).
Dataset
Mark a model (complex object) as being a dataset.
- API: @org.talend.sdk.component.api.configuration.type.DataSet
- Sample:
{
"tcomp::configurationtype::name":"test",
"tcomp::configurationtype::type":"dataset"
}
Datastore
Mark a model (complex object) as being a datastore (connection to a backend).
- API: @org.talend.sdk.component.api.configuration.type.DataStore
- Sample:
{
"tcomp::configurationtype::name":"test",
"tcomp::configurationtype::type":"datastore"
}
DatasetDiscovery
Mark a model (complex object) as being a dataset discovery configuration.
- API: @org.talend.sdk.component.api.configuration.type.DatasetDiscovery
- Sample:
{
"tcomp::configurationtype::name":"test",
"tcomp::configurationtype::type":"datasetDiscovery"
}
The component family associated with a configuration type (datastore/dataset) is always the one related to the component using that configuration.
The configuration type index is represented as a flat tree that contains all the configuration types, which themselves are represented as nodes and indexed by ID.
Every node can point to other nodes. This relation is represented as an array of edges that provides the child IDs.
As an illustration, a configuration type index for the example above can be defined as follows:
{
  "nodes": {
    "idForDstore": { "datastore": "datastore data", "edges": ["idForDset"] },
    "idForDset": { "dataset": "dataset data" }
  }
}
Defining an input component logic
Input components are the components generally placed at the beginning of a Talend job. They are in charge of retrieving the data that will later be processed in the job.
An input component is primarily made of three distinct logics:
- The execution logic of the component itself, defined through a partition mapper.
- The configurable part of the component, defined through the mapper configuration.
- The source logic, defined through a producer.
Before implementing the component logic and defining its layout and configurable fields, make sure you have specified its basic metadata, as detailed in this document.
Defining a partition mapper
What is a partition mapper
A Partition Mapper (PartitionMapper) is a component able to split itself to make the execution more efficient. This concept is borrowed from big data and useful in this context only (Beam executions).
The idea is to divide the work before executing it in order to reduce the overall execution time.
The process is the following:
- The size of the data you work on is estimated. This part can be heuristic and not very precise.
- From that size, the execution engine (runner for Beam) requests the mapper to split itself into N mappers with a subset of the overall work.
- The leaf (final) mapper is used as a Producer (actual reader) factory.
This kind of component must be Serializable to be distributable.
Implementing a partition mapper
A partition mapper requires three methods marked with specific annotations:
- @Assessor for the evaluating method
- @Split for the dividing method
- @Emitter for the Producer factory
@Assessor
The Assessor method returns the estimated size of the data related to the component (depending on its configuration). It must return a Number and must not take any parameter.
For example:
@Assessor
public long estimateDataSetByteSize() {
return ....;
}
@Split
The Split method returns a collection of partition mappers and can optionally take a @PartitionSize long value as parameter, which is the requested size of the dataset per sub partition mapper.
For example:
@Split
public List<MyMapper> split(@PartitionSize final long desiredSize) {
return ....;
}
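The @Emitter method is the Producer factory: it takes no parameter and returns the producer instance. A minimal sketch, assuming a MySource class that wraps the component configuration:

@Emitter
public MySource createSource() {
    // returns the producer (actual reader) created from this leaf mapper
    return new MySource(configuration);
}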
Defining the producer method
The Producer defines the source logic of an input component. It handles the interaction with a physical source and produces input data for the processing flow.
A producer must have a @Producer method without any parameter. It is triggered by the @Emitter method of the partition mapper and can return any data. It is defined in the <component_name>Source.java file:
@Producer
public MyData produces() {
return ...;
}
Defining a processor or an output component logic
Processors and output components are the components in charge of reading, processing and transforming data in a Talend job, as well as passing it to its required destination.
Before implementing the component logic and defining its layout and configurable fields, make sure you have specified its basic metadata, as detailed in this document.
Defining a processor
What is a processor
A Processor is a component that converts incoming data to a different model.
A processor must have a method decorated with @ElementListener, taking incoming data and returning the processed data:
@ElementListener
public MyNewData map(final MyData data) {
return ...;
}
Processors must be Serializable because they are distributed components.
If you just need to access data on a map-based ruleset, you can use Record or JsonObject as parameter type. From there, Talend Component Kit wraps the data to allow you to access it as a map. The parameter type is not enforced.
This means that if you know you will get a SuperCustomDto, then you can use it as parameter type. But for generic components that are reusable in any chain, it is highly encouraged to use Record until you have an evaluation language-based processor that has its own way to access components.
For example:
@ElementListener
public MyNewData map(final Record incomingData) {
String name = incomingData.getString("name");
int age = incomingData.getInt("age");
return ...;
}
// equivalent to (using POJO subclassing)
public class Person {
private String name;
private int age;
// getters/setters
}
@ElementListener
public MyNewData map(final Person person) {
String name = person.getName();
int age = person.getAge();
return ...;
}
A processor also supports @BeforeGroup and @AfterGroup methods, which must not have any parameter and must return void. Any other result would be ignored.
These methods are used by the runtime to mark a chunk of the data in a way which is estimated good for the execution flow size.
Because the size is estimated, the size of a group can vary. It is even possible to have groups of size 1.
It is recommended to batch records, for performance reasons:
@BeforeGroup
public void initBatch() {
// ...
}
@AfterGroup
public void endBatch() {
// ...
}
You can optimize the data batch processing by using the maxBatchSize parameter. This parameter is automatically implemented on the component when it is deployed to a Talend application; only the logic needs to be implemented. You can however customize its value by setting, in your LocalConfiguration, the _maxBatchSize.value property for the family, or the ${component simple class name}._maxBatchSize.value property for a particular component; otherwise it defaults to 1000. If you replace value by active, you can also configure whether this feature is enabled. This is useful when you don't want to use it at all. Learn how to implement chunking/bulking in this document.
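As an illustration, the overrides described above could look like the following LocalConfiguration entries; the component class name is hypothetical:

# Family-wide override of the default batch size.
_maxBatchSize.value = 500
# Override for a single component class.
BulkProcessor._maxBatchSize.value = 200
# Disable the feature for that component.
BulkProcessor._maxBatchSize.active = false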
Defining output connections
In some cases, you may need to split the output of a processor in two or more connections. A common example is to have "main" and "reject" output connections, where part of the incoming data is passed to a specific bucket and processed later.
Talend Component Kit supports two types of output connections: Flow and Reject.
- Flow is the main and standard output connection.
- The Reject connection handles records rejected during the processing. A component can only have one reject connection, if any. Its name must be REJECT to be processed correctly in Talend applications.
You can also define the different output connections of your component in the Starter.
To define an output connection, you can use @Output as replacement of the returned value in the @ElementListener:
@ElementListener
public void map(final MyData data, @Output final OutputEmitter<MyNewData> output) {
output.emit(createNewData(data));
}
Alternatively, you can pass a string that represents the new branch:
@ElementListener
public void map(final MyData data,
@Output final OutputEmitter<MyNewData> main,
@Output("REJECT") final OutputEmitter<MyNewDataWithError> rejected) {
if (isRejected(data)) {
rejected.emit(createNewData(data));
} else {
main.emit(createNewData(data));
}
}
// or
@ElementListener
public MyNewData map(final MyData data,
@Output("REJECT") final OutputEmitter<MyNewDataWithError> rejected) {
if (isSuspicious(data)) {
rejected.emit(createNewData(data));
return createNewData(data); // in this case the processing continues but notifies another channel
}
return createNewData(data);
}
Defining multiple inputs
Having multiple inputs is similar to having multiple outputs, except that an OutputEmitter wrapper is not needed:
@ElementListener
public MyNewData map(@Input final MyData data, @Input("input2") final MyData2 data2) {
return createNewData(data1, data2);
}
@Input takes the input name as parameter. If no name is set, it defaults to the main (default) input branch. It is recommended to use the default branch when possible and to avoid naming branches according to the component semantic.
Implementing batch processing
What is batch processing
Batch processing refers to the way execution environments process batches of data handled by a component using a grouping mechanism.
By default, the execution environment of a component automatically decides how to process groups of records and estimates an optimal group size depending on the system capacity. With this default behavior, the size of each group could sometimes be optimized for the system to handle the load more effectively or to match business requirements.
For example, real-time or near real-time processing needs often imply processing smaller batches of data, but more often. On the other hand, a one-time processing without business constraints is more effectively handled with a batch size based on the system capacity.
Final users of a component developed with the Talend Component Kit that integrates the batch processing logic described in this document can override this automatic size. To do that, a maxBatchSize option is available in the component settings and allows users to set the maximum size of each group of data to process.
A component processes batch data as follows:
-
Case 1 - No maxBatchSize is specified in the component configuration. The execution environment estimates a group size of 4. Records are processed by groups of 4.
-
Case 2 - The runtime estimates a group size of 4 but a maxBatchSize of 3 is specified in the component configuration. The system adapts the group size to 3. Records are processed by groups of 3.
Batch processing implementation logic
Batch processing relies on a sequence of three methods, @BeforeGroup, @ElementListener and @AfterGroup, that you can customize to your needs as a component developer.
The group size automatic estimation logic is automatically implemented when a component is deployed to a Talend application.
Each group is processed as follows until there is no record left:
-
The @BeforeGroup method resets a record buffer at the beginning of each group.
-
The records of the group are assessed one by one and placed in the buffer as follows: the @ElementListener method tests if the buffer size is greater than or equal to the defined maxBatchSize. If it is, the records are processed. If not, the current record is buffered.
-
The previous step happens for all records of the group. Then the @AfterGroup method tests if the buffer is empty.
You can define the following logic in the processor configuration:
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collection;
import javax.json.JsonObject;
import org.talend.sdk.component.api.processor.AfterGroup;
import org.talend.sdk.component.api.processor.BeforeGroup;
import org.talend.sdk.component.api.processor.ElementListener;
import org.talend.sdk.component.api.processor.Processor;
@Processor(name = "BulkOutputDemo")
public class BulkProcessor implements Serializable {
private Collection<JsonObject> buffer;
@BeforeGroup
public void begin() {
buffer = new ArrayList<>();
}
@ElementListener
public void bufferize(final JsonObject object) {
buffer.add(object);
}
@AfterGroup
public void commit() {
// saves buffered records at once (bulk)
}
}
You can also use the condensed syntax for this kind of processor:
@Processor(name = "BulkOutputDemo")
public class BulkProcessor implements Serializable {
@AfterGroup
public void commit(final Collection<Record> records) {
// saves records
}
}
When writing tests for components, you can force the maxBatchSize parameter value by setting it with the following syntax: <configuration prefix>.$maxBatchSize=10.
You can learn more about processors in this document.
Shortcut syntax for bulk output processors
For output components (which do not emit any data) that use bulking, you can pass the list of records to the after group method:
@Processor(name = "DocOutput")
public class DocOutput implements Serializable {
@AfterGroup
public void onCommit(final Collection<Record> records) {
// save records
}
}
Defining an output
What is an output
An output is a processor that does not return any data.
Conceptually, an output is a data listener: it matches the concept of a processor, and being the last component of the execution chain or returning no data is what makes your processor an output component:
@ElementListener
public void store(final MyData data) {
// ...
}
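Putting it together, a minimal output component sketch - the class, configuration and record names are illustrative, not part of this document:
import java.io.Serializable;
import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.processor.ElementListener;
import org.talend.sdk.component.api.processor.Processor;

@Processor(name = "MyOutput")
public class MyOutput implements Serializable {
    private final MyOutputConfiguration configuration; // hypothetical configuration class

    public MyOutput(@Option("configuration") final MyOutputConfiguration configuration) {
        this.configuration = configuration;
    }

    @ElementListener
    public void store(final MyData data) {
        // write the incoming record to the destination; returning no data makes this an output
    }
}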
Defining a standalone component logic
Standalone components are components without input or output flows. They are designed to perform actions without reading or processing any data. For example, standalone components can be used to create indexes in databases. A sketch is shown below.
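As a sketch - assuming the @DriverRunner and @RunAtDriver annotations of the standalone API package, with illustrative class and configuration names - a standalone component can look like:
import java.io.Serializable;
import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.standalone.DriverRunner;
import org.talend.sdk.component.api.standalone.RunAtDriver;

@DriverRunner(name = "CreateIndex")
public class CreateIndexRunner implements Serializable {
    private final MyConnectionConfiguration configuration; // hypothetical configuration class

    public CreateIndexRunner(@Option("configuration") final MyConnectionConfiguration configuration) {
        this.configuration = configuration;
    }

    @RunAtDriver
    public void run() {
        // perform the one-shot action, for example creating an index in a database
    }
}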
Before implementing the component logic and defining its layout and configurable fields, make sure you have specified its basic metadata, as detailed in this document.
Defining component layout and configuration
The component configuration is defined in the <component_name>Configuration.java
file of the package. It consists in defining the configurable part of the component that will be displayed in the UI.
To do that, you can specify parameters. When you import the project in your IDE, the parameters that you have specified in the starter are already present.
All input and output components must reference a dataset in their configuration. Refer to Defining datasets and datastores.
Parameter name
Components are configured using their constructor parameters. All parameters can be marked with the @Option annotation, which lets you give them a name.
For the name to be correct, you must follow these guidelines:
-
Use a valid Java name.
-
Do not include any . character in it.
-
Do not start the name with a $.
-
Defining a name is optional. If you don’t set a specific name, it defaults to the bytecode name. This can require you to compile with the -parameters flag to avoid ending up with names such as arg0, arg1, and so on.
Examples of option name:
Option name | Valid |
---|---|
myName | yes |
my_name | yes |
my.name | no |
$myName | no |
Parameter types
Parameter types can be primitives or complex objects with fields decorated with @Option
exactly like method parameters.
It is recommended to use simple models which can be serialized, in order to ease serializable component implementations.
For example:
class FileFormat implements Serializable {
    @Option("type")
    private FileType type = FileType.CSV;

    @Option("max-records")
    private int maxRecords = 1024;
}

@PartitionMapper(name = "file-reader")
public class MyFileReader implements Serializable {
    public MyFileReader(@Option("file-path") final File file,
                        @Option("file-format") final FileFormat format) {
        // ...
    }
}
Using this kind of API makes the configuration extensible and component-oriented, which allows you to define all you need.
The instantiation of the parameters is done from the properties passed to the component.
Primitives
A primitive is a class which can be directly converted from a String
to the expected type.
It includes all Java primitives and the String type itself, but also all types with an org.apache.xbean.propertyeditor.Converter:
-
BigDecimal
-
BigInteger
-
File
-
InetAddress
-
ObjectName
-
URI
-
URL
-
Pattern
-
LocalDateTime
-
ZonedDateTime
Mapping complex objects
The conversion from property to object uses the dot notation.
For example, assuming the method parameter was configured with @Option("file")
:
file.path = /home/user/input.csv
file.format = CSV
matches
public class FileOptions {
@Option("path")
private File path;
@Option("format")
private Format format;
}
List case
Lists rely on an indexed syntax to define their elements.
For example, assuming that the list parameter is named files and that the elements are of the FileOptions type, you can define a list of two elements as follows:
files[0].path = /home/user/input1.csv
files[0].format = CSV
files[1].path = /home/user/input2.xml
files[1].format = EXCEL
If you want to override a configuration and truncate an array, use the length index. For example, to truncate the previous example to keep only the CSV entry, you can set:
files[length] = 1
Map case
Similarly to the list case, the map uses .key[index]
and .value[index]
to represent its keys and values:
// Map<String, FileOptions>
files.key[0] = first-file
files.value[0].path = /home/user/input1.csv
files.value[0].type = CSV
files.key[1] = second-file
files.value[1].path = /home/user/input2.xml
files.value[1].type = EXCEL
// Map<FileOptions, String>
files.key[0].path = /home/user/input1.csv
files.key[0].type = CSV
files.value[0] = first-file
files.key[1].path = /home/user/input2.xml
files.key[1].type = EXCEL
files.value[1] = second-file
Avoid using the Map type. Instead, prefer configuring your component with an object whenever possible.
Defining Constraints and validations on the configuration
You can use metadata to specify that a field is required or has a minimum size, and so on. This is done using the validation
metadata in the org.talend.sdk.component.api.configuration.constraint
package:
MaxLength
Ensure the decorated option size is validated with a higher bound.
-
API:
@org.talend.sdk.component.api.configuration.constraint.Max
-
Name:
maxLength
-
Parameter Type:
double
-
Supported Types: —
java.lang.CharSequence
-
Sample:
{
"validation::maxLength":"12.34"
}
MinLength
Ensure the decorated option size is validated with a lower bound.
-
API:
@org.talend.sdk.component.api.configuration.constraint.Min
-
Name:
minLength
-
Parameter Type:
double
-
Supported Types: —
java.lang.CharSequence
-
Sample:
{
"validation::minLength":"12.34"
}
Pattern
Validate the decorated string with a JavaScript pattern (even in the Studio).
-
API:
@org.talend.sdk.component.api.configuration.constraint.Pattern
-
Name:
pattern
-
Parameter Type:
java.lang.String
-
Supported Types: —
java.lang.CharSequence
-
Sample:
{
"validation::pattern":"test"
}
Max
Ensure the decorated option value is validated with a higher bound.
-
API:
@org.talend.sdk.component.api.configuration.constraint.Max
-
Name:
max
-
Parameter Type:
double
-
Supported Types: —
java.lang.Number
—int
—short
—byte
—long
—double
—float
-
Sample:
{
"validation::max":"12.34"
}
Min
Ensure the decorated option value is validated with a lower bound.
-
API:
@org.talend.sdk.component.api.configuration.constraint.Min
-
Name:
min
-
Parameter Type:
double
-
Supported Types: —
java.lang.Number
—int
—short
—byte
—long
—double
—float
-
Sample:
{
"validation::min":"12.34"
}
Required
Mark the field as being mandatory.
-
API:
@org.talend.sdk.component.api.configuration.constraint.Required
-
Name:
required
-
Parameter Type:
-
-
Supported Types: —
java.lang.Object
-
Sample:
{
"validation::required":"true"
}
MaxItems
Ensure the decorated option size is validated with a higher bound.
-
API:
@org.talend.sdk.component.api.configuration.constraint.Max
-
Name:
maxItems
-
Parameter Type:
double
-
Supported Types: —
java.util.Collection
-
Sample:
{
"validation::maxItems":"12.34"
}
MinItems
Ensure the decorated option size is validated with a lower bound.
-
API:
@org.talend.sdk.component.api.configuration.constraint.Min
-
Name:
minItems
-
Parameter Type:
double
-
Supported Types: —
java.util.Collection
-
Sample:
{
"validation::minItems":"12.34"
}
UniqueItems
Ensure the elements of the collection are distinct (kind of set).
-
API:
@org.talend.sdk.component.api.configuration.constraint.Uniques
-
Name:
uniqueItems
-
Parameter Type:
-
-
Supported Types: —
java.util.Collection
-
Sample:
{
"validation::uniqueItems":"true"
}
When using the programmatic API, metadata is prefixed by tcomp::. This prefix is stripped in the web for convenience, and the table above uses the web keys.
Also note that these validations are executed before the runtime is started (when loading the component instance) and that the execution will fail if they don’t pass.
If it breaks your application, you can disable that validation on the JVM by setting the system property talend.component.configuration.validation.skip
to true
.
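Put together, these constraints are applied directly on the options of a configuration class. A short sketch - the field names are illustrative:
import java.io.Serializable;
import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.configuration.constraint.Max;
import org.talend.sdk.component.api.configuration.constraint.Min;
import org.talend.sdk.component.api.configuration.constraint.Pattern;
import org.talend.sdk.component.api.configuration.constraint.Required;

public class MyValidatedConfiguration implements Serializable {
    @Option
    @Required
    @Pattern("^https?://.*") // validated as a JavaScript pattern
    private String url;

    @Option
    @Min(1)
    @Max(65535)
    private int port;
}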
Defining datasets and datastores
Datasets and datastores are configuration types that define how and where to pull the data from. They are used at design time to create shared configurations that can be stored and used at runtime.
All connectors (input and output components) created using Talend Component Kit must reference a valid dataset. Each dataset must reference a datastore.
-
Datastore: The data you need to connect to the backend.
-
Dataset: A datastore coupled with the data you need to execute an action.
Make sure that each dataset references a datastore and that each input or output component references a dataset. These rules are enforced by the validateDataSet check during component validation (see Validating the component programming model).
Defining a datastore
A datastore defines the information required to connect to a data source. For example, it can be made of:
-
a URL
-
a username
-
a password.
You can specify a datastore and its context of use (in which dataset, etc.) from the Component Kit Starter.
Make sure to model the data your components are designed to handle before defining datasets and datastores in the Component Kit Starter.
Once you generate and import the project into an IDE, you can find datastores under a specific datastore
node.
Example of datastore:
package com.mycomponent.components.datastore;
@DataStore("DatastoreA") (1)
@GridLayout({ (2)
// The generated component layout will display one configuration entry per line.
// Customize it as much as needed.
@GridLayout.Row({ "apiurl" }),
@GridLayout.Row({ "username" }),
@GridLayout.Row({ "password" })
})
@Documentation("A Datastore made of an API URL, a username, and a password. The password is marked as Credential.") (3)
public class DatastoreA implements Serializable {
@Option
@Documentation("")
private String apiurl;
@Option
@Documentation("")
private String username;
@Option
@Credential
@Documentation("")
private String password;
public String getApiurl() {
return apiurl;
}
public DatastoreA setApiurl(String apiurl) {
this.apiurl = apiurl;
return this;
}
public String getUsername() {
return username;
}
public DatastoreA setUsername(String username) {
this.username = username;
return this;
}
public String getPassword() {
return password;
}
public DatastoreA setPassword(String password) {
this.password = password;
return this;
}
}
1 | Identifying the class as a datastore and naming it. |
2 | Defining the layout of the datastore configuration. |
3 | Defining each element of the configuration: a URL, a username, and a password. Note that the password is also marked as a credential. |
Defining a dataset
A dataset represents the inbound data. It is generally made of:
-
A datastore that defines the connection information needed to access the data.
-
A query.
You can specify a dataset and its context of use (in which input and output component it is used) from the Component Kit Starter.
Make sure to model the data your components are designed to handle before defining datasets and datastores in the Component Kit Starter.
Once you generate and import the project into an IDE, you can find datasets under a specific dataset
node.
Example of dataset referencing the datastore shown above:
package com.datastorevalidation.components.dataset;
@DataSet("DatasetA") (1)
@GridLayout({
// The generated component layout will display one configuration entry per line.
// Customize it as much as needed.
@GridLayout.Row({ "datastore" })
})
@Documentation("A Dataset configuration containing a simple datastore") (2)
public class DatasetA implements Serializable {
@Option
@Documentation("Datastore")
private DatastoreA datastore;
public DatastoreA getDatastore() {
return datastore;
}
public DatasetA setDatastore(DatastoreA datastore) {
this.datastore = datastore;
return this;
}
}
1 | Identifying the class as a dataset and naming it. |
2 | Implementing the dataset and referencing DatastoreA as the datastore to use. |
Internationalizing datasets and datastores
The display name of each dataset and datastore must be referenced in the message.properties
file of the family package.
The key for dataset and datastore display names follows a defined pattern: ${family}.${configurationType}.${name}._displayName
. For example:
ComponentFamily.dataset.DatasetA._displayName=Dataset A
ComponentFamily.datastore.DatastoreA._displayName=Datastore A
These keys are automatically added for datasets and datastores defined from the Component Kit Starter.
Reusing datasets and datastores in Talend Studio
When deploying a component or set of components that include datasets and datastores to Talend Studio, a new node is created under Metadata. This node has the name of the component family that was deployed.
It allows users to create reusable configurations for datastores and datasets.
With predefined datasets and datastores, users can then quickly fill the component configuration in their jobs. They can do so by selecting Repository as Property Type and by browsing to the predefined dataset or datastore.
How to create a reusable connection in Studio
Talend Studio can automatically generate connection and close components, so that a connection can be reused across the input and output components of a family. To enable this, declare a service with @CreateConnection and @CloseConnection methods, as in the following example:
@Service
public class SomeService {
@CreateConnection
public Object createConn(@Option("configuration") SomeDataStore dataStore) throws ComponentException {
    Object connection = null;
    // create the connection object from the dataStore
    return connection;
}
}
@CloseConnection
public CloseConnectionObject closeConn() {
return new CloseConnectionObject() {
public boolean close() throws ComponentException {
Object connection = this.getConnection();
//do close action
return true;
}
};
}
}
Then the runtime mapper and processor only need to use @Connection to get the connection, as follows:
@Version(1)
@Icon(value = Icon.IconType.CUSTOM, custom = "SomeInput")
@PartitionMapper(name = "SomeInput")
@Documentation("the doc")
public class SomeInputMapper implements Serializable {
@Connection
SomeConnection conn;
}
How does the component server interact with datasets and datastores
The component server scans all configuration types and returns a configuration type index. This index can be used for the integration into the targeted platforms (Studio, web applications, and so on).
Dataset
Mark a model (complex object) as being a dataset.
-
API: @org.talend.sdk.component.api.configuration.type.DataSet
-
Sample:
{
"tcomp::configurationtype::name":"test",
"tcomp::configurationtype::type":"dataset"
}
Datastore
Mark a model (complex object) as being a datastore (connection to a backend).
-
API: @org.talend.sdk.component.api.configuration.type.DataStore
-
Sample:
{
"tcomp::configurationtype::name":"test",
"tcomp::configurationtype::type":"datastore"
}
DatasetDiscovery
Mark a model (complex object) as being a dataset discovery configuration.
-
API: @org.talend.sdk.component.api.configuration.type.DatasetDiscovery
-
Sample:
{
"tcomp::configurationtype::name":"test",
"tcomp::configurationtype::type":"datasetDiscovery"
}
The component family associated with a configuration type (datastore/dataset) is always the one related to the component using that configuration.
The configuration type index is represented as a flat tree that contains all the configuration types, which themselves are represented as nodes and indexed by ID.
Every node can point to other nodes. This relation is represented as an array of edges that provides the child IDs.
As an illustration, a configuration type index for the example above can be defined as follows:
{
  "nodes": {
    "idForDstore": { "datastore": "datastore data", "edges": [ "idForDset" ] },
    "idForDset": { "dataset": "dataset data" }
  }
}
Defining links between properties
If you need to define a binding between properties, you can use a set of annotations:
ActiveIf
If the evaluation of the element at the location matches the value, the element is considered active; otherwise it is deactivated.
-
API:
@org.talend.sdk.component.api.configuration.condition.ActiveIf
-
Type:
if
-
Sample:
{
"condition::if::evaluationStrategy":"DEFAULT",
"condition::if::negate":"false",
"condition::if::target":"test",
"condition::if::value":"value1,value2"
}
ActiveIfs
Allows to set multiple visibility conditions on the same property.
-
API:
@org.talend.sdk.component.api.configuration.condition.ActiveIfs
-
Type:
ifs
-
Sample:
{
"condition::if::evaluationStrategy::0":"DEFAULT",
"condition::if::evaluationStrategy::1":"LENGTH",
"condition::if::negate::0":"false",
"condition::if::negate::1":"true",
"condition::if::target::0":"sibling1",
"condition::if::target::1":"../../other",
"condition::if::value::0":"value1,value2",
"condition::if::value::1":"SELECTED",
"condition::ifs::operator":"AND"
}
Where:
-
target is the element to evaluate.
-
value is the value to compare against.
-
strategy (optional) is the evaluation criterion. Possible values are:
-
CONTAINS: checks if a string or list of strings contains the defined value.
-
DEFAULT: compares against the raw value.
-
LENGTH: for an array or string, evaluates the size of the value instead of the value itself.
-
negate (optional) defines if the test must be positive (default, set to false) or negative (set to true).
-
operator (optional) is the comparison operator used to combine several conditions, if applicable. Possible values are AND and OR.
The target element location is specified as a relative path to the current location, using Unix path characters.
The configuration class delimiter is /
.
The parent configuration class is specified by ..
.
Thus, ../targetProperty
denotes a property, which is located in the parent configuration class and is named targetProperty
.
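For instance, two illustrative configuration classes (each in its own file) where the nested one references an option of its parent:
import java.io.Serializable;
import org.talend.sdk.component.api.configuration.Option;
import org.talend.sdk.component.api.configuration.condition.ActiveIf;

public class ParentConfiguration implements Serializable {
    @Option
    private String type;

    @Option
    private ChildConfiguration child;
}

public class ChildConfiguration implements Serializable {
    @Option
    // active only when the "type" option of the parent configuration class is set to "advanced"
    @ActiveIf(target = "../type", value = "advanced")
    private String advancedOption;
}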
When using the programmatic API, metadata is prefixed with tcomp::. This prefix is stripped in the web for convenience, and the previous table uses the web keys.
For more details, refer to the related Javadocs.
ActiveIf examples
Example 1
A common use of the ActiveIf condition consists in testing if a target property has a value. To do that, it is possible to test if the length of the property value is different from 0:
-
target: foo - the path to the property to evaluate.
-
strategy: LENGTH - the strategy consists here in testing the length of the property value.
-
value: 0 - the length of the property value is compared to 0.
-
negate: true - setting negate to true means that the strategy of the target must be different from the value defined. In this case, the LENGTH of the value of the foo property must be different from 0.
{
  "condition::if::target": "foo",
  "condition::if::value": "0",
  "condition::if::negate": "true",
  "condition::if::evaluationStrategy": "LENGTH"
}
Example 2
The following example shows how to implement visibility conditions on several fields based on several checkbox configurations:
-
If the first checkbox is selected, an additional input field is displayed.
-
If the second or the third checkbox is selected, an additional input field is displayed.
-
If both the second and the third checkboxes are selected, an additional input field is displayed.
@GridLayout({
// the generated layout put one configuration entry per line,
// customize it as much as needed
@GridLayout.Row({ "checkbox1" }),
@GridLayout.Row({ "checkbox2" }),
@GridLayout.Row({ "checkbox3" }),
@GridLayout.Row({ "configuration4" }),
@GridLayout.Row({ "configuration5" }),
@GridLayout.Row({ "configuration6" })
})
@Documentation("A sample configuration with different visibility condition cases")
public class ActiveifProcessorProcessorConfiguration implements Serializable {
@Option
@Documentation("")
private boolean checkbox1;
@Option
@Documentation("")
private boolean checkbox2;
@Option
@Documentation("")
private boolean checkbox3;
@Option
@ActiveIf(target = "checkbox1", value = "true")
@Documentation("Active if checkbox1 is selected")
private String configuration4;
@Option
@ActiveIfs(operator = ActiveIfs.Operator.OR, value = {
@ActiveIf(target = "checkbox2", value = "true"),
@ActiveIf(target = "checkbox3", value = "true")
})
@Documentation("Active if checkbox2 or checkbox 3 are selected")
private String configuration5;
@Option
@ActiveIfs(operator = ActiveIfs.Operator.AND, value = {
@ActiveIf(target = "checkbox2", value = "true"),
@ActiveIf(target = "checkbox3", value = "true")
})
@Documentation("Active if checkbox2 and checkbox 3 are selected")
private String configuration6;
}
Adding hints about the rendering
In some cases, you may need to add metadata about the configuration to let the UI render that configuration properly.
For example, a password value that must be hidden and not a simple clear input box. For these cases - if you want to change the UI rendering - you can use a particular set of annotations:
@DefaultValue
Provide a default value the UI can use - only for primitive fields.
-
API:
@org.talend.sdk.component.api.configuration.ui.DefaultValue
@OptionsOrder
Allows to sort the properties of a class.
-
API:
@org.talend.sdk.component.api.configuration.ui.OptionsOrder
@AutoLayout
Request the renderer to do what it thinks is best.
-
API:
@org.talend.sdk.component.api.configuration.ui.layout.AutoLayout
@GridLayout
Advanced layout to place properties by row. This is exclusive with @OptionsOrder.
The logic to handle forms (GridLayout names) is the following: if only one layout is defined, use it; otherwise, if Main and Advanced layouts exist and at least Main is defined, use them; otherwise, use all available layouts.
-
API:
@org.talend.sdk.component.api.configuration.ui.layout.GridLayout
@GridLayouts
Allows to configure multiple grid layouts on the same class, qualified with a classifier (name).
-
API:
@org.talend.sdk.component.api.configuration.ui.layout.GridLayouts
@HorizontalLayout
Put on a configuration class, it notifies the UI that a horizontal layout is preferred.
-
API:
@org.talend.sdk.component.api.configuration.ui.layout.HorizontalLayout
@VerticalLayout
Put on a configuration class, it notifies the UI that a vertical layout is preferred.
-
API:
@org.talend.sdk.component.api.configuration.ui.layout.VerticalLayout
@Code
Mark a field as being represented by some code widget (vs textarea for instance).
-
API:
@org.talend.sdk.component.api.configuration.ui.widget.Code
@Credential
Mark a field as being a credential. It is typically used to hide the value in the UI.
-
API:
@org.talend.sdk.component.api.configuration.ui.widget.Credential
@DateTime
Mark a field as being a date. It is implicit - which means you don’t need to put that annotation on the option - for java.time.ZonedDateTime, java.time.LocalDate and java.time.LocalDateTime, and is unspecified for other types.
-
API:
@org.talend.sdk.component.api.configuration.ui.widget.DateTime
@Structure
Mark a List<String> or List<Object> field as being represented as the component data selector.
-
API:
@org.talend.sdk.component.api.configuration.ui.widget.Structure
@TextArea
Mark a field as being represented by a textarea (multiline text input).
-
API:
@org.talend.sdk.component.api.configuration.ui.widget.TextArea
Snippets
{
"ui::textarea":"true"
}
When using the programmatic API, metadata is prefixed with tcomp::. This prefix is stripped in the web for convenience, and the previous table uses the web keys.
You can also check this example about masking credentials.
Target support should cover org.talend.core.model.process.EParameterFieldType
but you need to ensure that the web renderer is able to handle the same widgets.
Implementation samples
You can find sample working components for each of the configuration cases below:
-
ActiveIf: Add visibility conditions on some configurations.
-
Checkbox: Add checkboxes or toggles to your component.
-
Code: Allow users to enter their own code.
-
Credential: Mark a configuration as sensitive data to avoid displaying it as plain text.
-
Datastore: Add a button allowing to check the connection to a datastore.
-
Datalist: Two ways of implementing a dropdown list with predefined choices.
-
Integer: Add numeric fields to your component configuration.
-
Min/Max: Specify a minimum or a maximum value for a numeric configuration.
-
Multiselect: Add a list and allow users to select multiple elements of that list.
-
Pattern: Enforce rules based on a specific pattern to prevent users from entering invalid values.
-
Required: Make a configuration mandatory.
-
Suggestions: Suggest possible values in a field based on what the users are entering.
-
Table: Add a table to your component configuration.
-
Textarea: Add a text area for configurations expecting long texts or values.
-
Input: Add a simple text input field to the component configuration.
-
Update: Provide a button allowing to fill a part of the component configuration based on a service.
-
Validation: Specify constraints to make sure that a URL is well formed.
Component execution logic
Each type of component has its own execution logic. The same basic logic is applied to all components of the same type, and is then extended to implement each component specificities. The project generated from the starter already contains the basic logic for each component.
Talend Component Kit framework relies on several primitive components.
All components can use @PostConstruct
and @PreDestroy
annotations to initialize or release some underlying resource at the beginning and the end of a processing.
In distributed environments, class constructors are called on the cluster manager node. Methods annotated with @PostConstruct and @PreDestroy are called on worker nodes. Thus, partition plan computation and pipeline tasks are performed on different nodes.
1 | The created task is a JAR file containing class information, which describes the pipeline (flow) that should be processed in cluster. |
2 | During the partition plan computation step, the pipeline is analyzed and split into stages. The cluster manager node instantiates mappers/processors, gets estimated data size using mappers, and splits created mappers according to the estimated data size. All instances are then serialized and sent to the worker node. |
3 | Serialized instances are received and deserialized. Methods annotated with @PostConstruct are called. After that, pipeline execution starts. The @BeforeGroup annotated method of the processor is called before processing the first element in the chunk. After processing the number of records estimated as the chunk size, the @AfterGroup annotated method of the processor is called. The chunk size is calculated depending on the environment the pipeline is processed by. Once the pipeline is processed, methods annotated with @PreDestroy are called. |
All the methods managed by the framework must be public. Private methods are ignored.
The framework is designed to be as declarative as possible but also to stay extensible by not using fixed interfaces or method signatures. This allows new features of the underlying implementations to be added incrementally.
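As an illustration, a processor sketch using these lifecycle hooks - MyClient and MyData are hypothetical types standing for an underlying resource and a record model:
import java.io.Serializable;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.talend.sdk.component.api.processor.ElementListener;
import org.talend.sdk.component.api.processor.Processor;

@Processor(name = "LifecycleDemo")
public class LifecycleDemoProcessor implements Serializable {
    private transient MyClient client; // hypothetical underlying resource

    @PostConstruct
    public void init() {
        // runs on the worker node after deserialization, before any record is processed
        client = new MyClient();
    }

    @ElementListener
    public void onElement(final MyData data) {
        // process each record using the underlying resource
    }

    @PreDestroy
    public void release() {
        // runs once the pipeline is processed, to free the underlying resource
        client.close();
    }
}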
Internationalizing components
In common cases, you can store messages using a properties file in your component module to use internationalization.
This properties file must be stored in the same package as the related components and named Messages
. For example, org.talend.demo.MyComponent
uses org.talend.demo.Messages[locale].properties
.
This file already exists when you import a project generated from the starter.
Default components keys
Out of the box, components are internationalized using the same location logic for the resource bundle. The supported keys are:
Name Pattern | Description |
---|---|
|
Display name of the family |
|
Display name of the category |
|
Display name of a configuration type (dataStore or dataSet). Important: this key is read from the family package (not the class package), to unify the localization of the metadata. |
|
Display name of an action of the family. Specifying it is optional and will default on the action name if not set. |
|
Display name of the component (used by the GUIs) |
|
Display name of the option. |
|
Equivalent to |
|
Placeholder of the option. |
|
Display name of the option using its class name. |
|
See |
|
See |
|
Display name of the |
|
Display name of tab corresponding to the layout (tab). Note that this requires the server |
Example of configuration for a component named list
and belonging to the memory
family (@Emitter(family = "memory", name = "list")
):
memory.list._displayName = Memory List
Internationalizing a configuration class
Configuration classes can be translated using the simple class name in the messages properties file. This is useful in case of common configurations shared by multiple components.
For example, if you have a configuration class as follows:
public class MyConfig {
@Option
private String host;
@Option
private int port;
}
You can give it a translatable display name by adding ${simple_class_name}.${property_name}._displayName
to Messages.properties
under the same package as the configuration class.
MyConfig.host._displayName = Server Host Name
MyConfig.host._placeholder = Enter Server Host Name...
MyConfig.port._displayName = Server Port
MyConfig.port._placeholder = Enter Server Port...
If you have a display name using the property path, it overrides the display name defined using the simple class name. This rule also applies to placeholders.
Managing component versions and migration
If some changes impact the configuration, they can be managed through a migration handler at the component level (enabling trans-model migration support).
The @Version
annotation supports a migrationHandler
method which migrates the incoming configuration to the current model.
For example, if the filepath
configuration entry from v1 changed to location
in v2, you can remap the value in your MigrationHandler
implementation.
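A minimal sketch of such a handler - assuming the rename happened in version 2; the class name is illustrative:
import java.util.HashMap;
import java.util.Map;
import org.talend.sdk.component.api.component.MigrationHandler;

public class MyComponentMigrationHandler implements MigrationHandler {
    @Override
    public Map<String, String> migrate(final int incomingVersion, final Map<String, String> incomingData) {
        final Map<String, String> migrated = new HashMap<>(incomingData);
        if (incomingVersion < 2) { // the option was renamed in v2
            final String value = migrated.remove("filepath");
            if (value != null) {
                migrated.put("location", value);
            }
        }
        return migrated;
    }
}
Such a handler is then registered on the component through @Version(value = 2, migrationHandler = MyComponentMigrationHandler.class).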
A best practice is to split migrations into services that you can inject in the migration handler (through constructor) rather than managing all migrations directly in the handler. For example:
// full component code structure skipped for brevity, kept only the migration part
@Version(value = 3, migrationHandler = MyComponent.Migrations.class)
public class MyComponent {
// the component code...
private interface VersionConfigurationHandler {
Map<String, String> migrate(Map<String, String> incomingData);
}
public static class Migrations implements MigrationHandler {
    private final List<VersionConfigurationHandler> handlers;

    // VersionConfigurationHandler implementations are decorated with @Service
    public Migrations(final List<VersionConfigurationHandler> migrations) {
        this.handlers = migrations;
        this.handlers.sort(/* some custom logic */);
    }

    @Override
    public Map<String, String> migrate(final int incomingVersion, final Map<String, String> incomingData) {
        Map<String, String> out = incomingData;
        for (final VersionConfigurationHandler handler : handlers) {
            out = handler.migrate(out);
        }
        return out;
    }
}
}
What is important to notice in this snippet is the fact that you can organize your migrations the way that best fits your component.
If you need to apply migrations in a specific order, make sure that they are sorted.
Consider this API as a migration callback rather than a migration API. Adjust the migration code structure you need behind the MigrationHandler, based on your component requirements, using service injection.
Difference between migrating a component configuration and a nested configuration
A nested configuration always migrates itself without any root prefix, whereas a component configuration always roots the full configuration.
For example, if your model is the following:
@Version
// ...
public class MyComponent implements Serializable {
public MyComponent(@Option("configuration") final MyConfig config) {
// ...
}
// ...
}
@DataStore
public class MyConfig implements Serializable {
@Option
private MyDataStore datastore;
}
@Version
@DataStore
public class MyDataStore implements Serializable {
@Option
private String url;
}
Then the component will see the path configuration.datastore.url
for the datastore url whereas the datastore
will see the path url
for the same property. You can see it as configuration types - @DataStore
, @DataSet
- being
configured with an empty root path.
Implementing streaming on a component
By default, input components are designed to receive a one-time batch of data to process. By enabling the streaming mode, you can instead set your component to process a continuous incoming flow of data.
When streaming is enabled on an input component, the component tries to pull data from its producer. When no data is pulled, it waits for a defined period of time before trying to pull data again, and so on. This period of time between tries is defined by a strategy.
This document explains how to configure this strategy and the cases where it can fit your needs.
Choosing between batch and streaming
Before enabling streaming on your component, make sure that it fits the scope and requirements of your project and that regular batch processing cannot be used instead.
Streaming is designed to help you deal with real-time or near real-time data processing cases, and should be used only for such cases. Enabling streaming impacts the performance when processing batches of data.
Enabling streaming from the Component Kit starter
You can enable streaming right from the design phase of the project by enabling the Stream toggle in the basic configuration of your future component in the Component Kit Starter.
Doing so adds a default streaming-ready configuration to your component when generating the project.
This default configuration implements a constant pause duration of 500 ms between retries, with no limit of retries.
Configuring streaming from the project
If streaming was not enabled at all during the project generation or if you need to implement a more specific configuration, you can change the default settings according to your needs:
-
Add the infinite=true parameter to your component class.
-
Define the number of retries allowed in the component family LocalConfiguration, using the talend.input.streaming.retry.maxRetries parameter. It is set by default to Integer.MAX_VALUE.
-
Define the pausing strategy between retries in the component family LocalConfiguration, using the talend.input.streaming.retry.strategy parameter. Possible values are:
-
constant (default). It sets a constant pause duration between retries.
-
exponential. It sets an exponential backoff pause duration.
See the tables below for more details about each strategy.
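For example - assuming the family LocalConfiguration is provided through a TALEND-INF/local-configuration.properties resource, which is an assumption of this sketch - the retry behavior can be tuned as follows:
talend.input.streaming.retry.maxRetries = 100
talend.input.streaming.retry.strategy = exponential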
Constant strategy
Parameter | Description | Default value |
---|---|---|
talend.input.streaming.retry.constant.timeout | Pause duration for the constant strategy, in milliseconds. | 500 |
Exponential strategy
Parameter | Description | Default value |
---|---|---|
talend.input.streaming.retry.exponential.exponent | Exponent of the exponential calculation. | |
talend.input.streaming.retry.exponential.randomizationFactor | Randomization factor used in the calculation. | |
talend.input.streaming.retry.exponential.maxDuration | Maximum pausing duration between two retries. | |
talend.input.streaming.retry.exponential.initialBackOff | Initial backoff value. | |
The values of these parameters are then used in the following calculations to determine the exact pausing duration between two retries. For more clarity in the formulas below, parameter names have been replaced with variables.
First, the current interval duration is calculated:
A = min(B × E^I, F)
Where:
-
A: currentIntervalMillis
-
B: initialBackOff
-
E: exponent
-
I: current number of retries
-
F: maxDuration
Then, from the current interval duration, the next interval duration is calculated:
D = min(F, A + ((R × 2 - 1) × C × A))
Where:
-
D: nextBackoffMillis
-
F: maxDuration
-
A: currentIntervalMillis
-
R: random
-
C: randomizationFactor
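As an illustration only - a sketch of the calculation above, not the framework code:
public class ExponentialBackoffSketch {
    // mirrors the two formulas: A = min(B * E^I, F) and D = min(F, A + ((R * 2 - 1) * C * A))
    public static long nextBackoffMillis(final double initialBackOff, final double exponent,
            final int retries, final double maxDuration, final double randomizationFactor) {
        final double random = Math.random(); // R
        final double currentIntervalMillis = Math.min(initialBackOff * Math.pow(exponent, retries), maxDuration); // A
        return (long) Math.min(maxDuration,
                currentIntervalMillis + ((random * 2 - 1) * randomizationFactor * currentIntervalMillis)); // D
    }
}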
Building components with Maven
To develop new components, Talend Component Kit requires a build tool in which you will import the component project generated from the starter.
You will then be able to install and deploy it to Talend applications. A Talend Component Kit plugin is available for each of the supported build tools.
talend-component-maven-plugin helps you write components that match best practices and transparently generates the metadata used by Talend Studio.
You can use it as follows:
<plugin>
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${component.version}</version>
</plugin>
This plugin is also an extension so you can declare it in your build/extensions
block as:
<extension>
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${component.version}</version>
</extension>
When the plugin is used as an extension, the goals detailed in this document are set up automatically.
Maven lifecycle
The Talend Component Kit plugin integrates some specific goals within Maven build lifecycle.
For example, to compile the project and prepare for deploying your component, run mvn clean install
. Using this command, the build is split into several phases and the goals bound to each phase are executed in order. Talend Component Kit uses default goals from the Maven build lifecycle and adds additional goals to the building and packaging phases.
Goals added to the build by Talend Component Kit are detailed below. The default lifecycle is detailed in the Maven documentation.
Talend Component Kit Maven goals
The Talend Component Kit plugin for Maven integrates several specific goals into Maven build lifecycle.
To run specific goals individually, run the following command from the root of the project, by adapting it with each goal name, parameters and values:
$ mvn talend-component:<name_of_the_goal>[:<execution id>] -D<param_user_property>=<param_value>
Dependencies
The dependencies goal is a shortcut for the maven-dependency-plugin. It creates the TALEND-INF/dependencies.txt
file with the compile
and runtime
dependencies, allowing the component to use it at runtime:
<plugin>
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${component.version}</version>
<executions>
<execution>
<id>talend-dependencies</id>
<goals>
<goal>dependencies</goal>
</goals>
</execution>
</executions>
</plugin>
Scan
The scan-descriptor
goal scans the current module and optionally other configured folders to precompute the list of interesting classes for the framework (components, services). It allows to save some bootstrap time when launching a job, which can be useful in some execution cases:
<plugin>
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${component.version}</version>
<executions>
<execution>
<id>talend-scan-descriptor</id>
<goals>
<goal>scan-descriptor</goal>
</goals>
</execution>
</executions>
</plugin>
Configuration - excluding parameters used by default only:
Name | Description | User property | Default |
---|---|---|---|
output |
Where to dump the scan result. Note: It is not supported to change that value in the runtime. |
|
|
scannedDirectories |
Explicit list of directories to scan. |
|
If not set, defaults to |
scannedDependencies |
Explicit list of dependencies to scan - set them in the |
|
- |
SVG2PNG
The svg2png goal scans a directory - target/classes/icons by default - to find .svg files and copies them as PNG versions sized 32x32 px and named with the suffix _icon32.png, so that the Studio can read them:
<plugin>
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${component.version}</version>
<executions>
<execution>
<id>talend-svg2png</id>
<goals>
<goal>svg2png</goal>
</goals>
</execution>
</executions>
</plugin>
Configuration:
Name | Description | User property | Default |
---|---|---|---|
icons |
Where to scan for the SVG icons to convert in PNG. |
talend.icons.source |
|
workarounds |
By default, the shape of the icon is enforced in the RGB channels (in white) using the alpha channel as reference. This is useful for black/white images that use alpha to shape the picture, because Eclipse - and therefore Talend Studio - caches icons using the RGB channels but not the alpha channel. Pictures that do not use the alpha channel to draw their shape should disable that workaround. |
talend.icons.workaround |
|
If you use that plugin, make sure to set it up before the validate mojo, otherwise the validation can miss some PNG files.
Validating the component programming model
This goal helps you validate the common programming model of the component. To activate it, you can use the following execution definition:
<plugin>
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${component.version}</version>
<executions>
<execution>
<id>talend-component-validate</id>
<goals>
<goal>validate</goal>
</goals>
</execution>
</executions>
</plugin>
It is bound to the process-classes
phase by default. When executed, it performs several validations that can be disabled by setting the corresponding flags to false
in the <configuration>
block of the execution:
Name | Description | User property | Default |
---|---|---|---|
validateInternationalization |
Validates that resource bundles are presents and contain commonly used keys (for example, |
|
true |
validateModel |
Ensures that components pass validations of the |
|
true |
validateSerializable |
Ensures that components are |
|
true |
validateMetadata |
Ensures that components have an |
|
true |
validateDataStore |
Ensures that any |
|
true |
validateDataSet |
Ensures that any |
|
true |
validateComponent |
Ensures that the native programming model is respected. You can disable it when using another programming model like Beam. |
|
true |
validateActions |
Validates action signatures for actions not tolerating dynamic binding ( |
|
true |
validateFamily |
Validates the family by verifying that the package containing the |
|
true |
validateDocumentation |
Ensures that all components and |
|
true |
validateLayout |
Ensures that the layout is referencing existing options and properties. |
|
true |
validateOptionNames |
Ensures that the option names are compliant with the framework. It is highly recommended and safer to keep it set to |
|
true |
validateLocalConfiguration |
Ensures that if any |
|
true |
validateOutputConnection |
Ensures that an output has only one input branch. |
|
true |
validatePlaceholder |
Ensures that string options have a placeholder. It is highly recommended to turn this property on. |
|
false |
locale |
The locale used to validate internationalization. |
|
root |
Generating the component documentation
The asciidoc
goal generates an Asciidoc file documenting your component from the configuration model (@Option
) and the @Documentation
property that you can add to options and to the component itself.
<plugin>
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${component.version}</version>
<executions>
<execution>
<id>talend-component-documentation</id>
<goals>
<goal>asciidoc</goal>
</goals>
</execution>
</executions>
</plugin>
Name | Description | User property | Default |
---|---|---|---|
level |
Level of the root title. |
|
2 ( |
output |
Output folder path. It is recommended to keep it to the default value. |
|
|
formats |
Map of the renderings to do. Keys are the format ( |
|
- |
attributes |
Asciidoctor attributes to use for the rendering when formats is set. |
|
- |
templateEngine |
Template engine configuration for the rendering. |
|
- |
templateDir |
Template directory for the rendering. |
|
- |
title |
Document title. |
|
${project.name} |
version |
The component version. It defaults to the pom version |
|
${project.version} |
workDir |
The template directory for the Asciidoctor rendering - if 'formats' is set. |
|
${project.build.directory}/talend-component/workdir |
attachDocumentations |
Allows to attach (and deploy) the documentations ( |
|
true |
htmlAndPdf |
If you use the plugin as an extension, you can add this property and set it to |
|
false |
Rendering your documentation
To render the generated documentation in HTML or PDF, you can use the Asciidoctor Maven plugin (or Gradle equivalent). You can configure both executions if you want both HTML and PDF renderings.
Make sure to execute the rendering after the documentation generation.
HTML rendering
If you prefer an HTML rendering, you can configure the following execution in the asciidoctor plugin. The example below:
-
Generates the components documentation in
target/classes/TALEND-INF/documentation.adoc
. -
Renders the documentation as an HTML file stored in
target/documentation/documentation.html
.
<plugin> (1)
<groupId>org.talend.sdk.component</groupId>
<artifactId>talend-component-maven-plugin</artifactId>
<version>${talend-component-kit.version}</version>
<executions>
<execution>
<id>documentation</id>
<phase>prepare-package</phase>
<goals>
<goal>asciidoc</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin> (2)
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>1.5.7</version>
<executions>
<execution>
<id>doc-html</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<sourceDirectory>${project.build.outputDirectory}/TALEND-INF</sourceDirectory>
<sourceDocumentName>documentation.adoc</sourceDocumentName>
<outputDirectory>${project.build.directory}/documentation</outputDirectory>
<backend>html5</backend>
</configuration>
</execution>
</executions>
</plugin>
PDF rendering
If you prefer a PDF rendering, you can configure the following execution in the asciidoctor plugin:
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>1.5.7</version>
<executions>
<execution>
<id>doc-pdf</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<sourceDirectory>${project.build.outputDirectory}/TALEND-INF</sourceDirectory>
<sourceDocumentName>documentation.adoc</sourceDocumentName>
<outputDirectory>${project.build.directory}/documentation</outputDirectory>
<backend>pdf</backend>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj-pdf</artifactId>
<version>1.5.0-alpha.16</version>
</dependency>
</dependencies>
</plugin>
Including the documentation into a document
If you want to add some more content or a title, you can include the generated document into another document using the Asciidoc include directive.
For example:
= Super Components
Super Writer
:toc:
:toclevels: 3
:source-highlighter: prettify
:numbered:
:icons: font
:hide-uri-scheme:
:imagesdir: images
include::{generated_doc}/documentation.adoc[]
To be able to do that, you need to pass the generated_doc
attribute to the plugin. For example:
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>1.5.7</version>
<executions>
<execution>
<id>doc-html</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<sourceDirectory>${project.basedir}/src/main/asciidoc</sourceDirectory>
<sourceDocumentName>my-main-doc.adoc</sourceDocumentName>
<outputDirectory>${project.build.directory}/documentation</outputDirectory>
<backend>html5</backend>
<attributes>
<generated_doc>${project.build.outputDirectory}/TALEND-INF</generated_doc>
</attributes>
</configuration>
</execution>
</executions>
</plugin>
This is optional but allows to reuse Maven placeholders to pass paths, which can be convenient in an automated build.
You can find more customization options on Asciidoctor website.
Testing a component web rendering
Testing the rendering of your component configuration into the Studio requires deploying the component in Talend Studio. Refer to the Studio documentation.
In the case where you need to deploy your component into a Cloud (web) environment, you can test its web rendering by using the web
goal of the plugin:
-
Run the
mvn talend-component:web
command. -
Open the following URL in a web browser:
localhost:8080
. -
Select the component form you want to see from the treeview on the left. The selected form is displayed on the right.
Two parameters are available with the plugin:
-
serverPort
, which allows to change the default port (8080) of the embedded server. Its associated user property istalend.web.port
. -
serverArguments
, that you can use to pass Meecrowave options to the server. Learn more about that configuration at openwebbeans.apache.org/meecrowave/meecrowave-core/cli.html.
Make sure to install the artifact before using this command because it reads the component JAR from the local Maven repository.
Finally, you can switch the language of the component UI (documentation, form) using the language query parameter in the webapp. For instance, localhost:8080?language=fr.
Changing the UI bundle
If you built a custom UI (JS + CSS) bundle and want to test it in the web application, you can configure it in the pom.xml
file as follows:
<configuration>
<uiConfiguration>
<jsLocation>https://cdn.talend.com/myapp.min.js</jsLocation>
<cssLocation>https://cdn.talend.com/myapp.min.css</cssLocation>
</uiConfiguration>
</configuration>
This is an advanced feature designed for expert users. Use it with caution.
Generating the component archive
Component ARchive (.car) is the way to bundle a component to share it in the Talend ecosystem. It is an executable Java ARchive (.jar) containing a metadata file and a nested Maven repository containing the component and its dependencies.
mvn talend-component:car
This command creates a .car file in your build directory. This file can be shared on Talend platforms.
This command has some optional parameters:
Name | Description | User property | Default |
---|---|---|---|
attach |
Specifies whether the component archive should be attached. |
|
true |
classifier |
The classifier to use if attach is set to true. |
|
component |
metadata |
Additional custom metadata to bundle in the component archive. |
- |
- |
output |
Specifies the output path and name of the archive |
|
${project.build.directory}/${project.build.finalName}.car |
packaging |
Specifies the packaging |
- |
${project.packaging} |
This CAR is executable and exposes the studio-deploy
command which takes
a Talend Studio home path as parameter. When executed, it installs the dependencies into the Studio and registers the component in your instance. For example:
# for a studio
java -jar mycomponent.car studio-deploy /path/to/my/studio
or
java -jar mycomponent.car studio-deploy --location /path/to/my/studio
# for a m2 provisioning
java -jar mycomponent.car maven-deploy /path/to/.m2/repository
or
java -jar mycomponent.car maven-deploy --location /path/to/.m2/repository
You can also upload the dependencies to your Nexus server using the following command:
java -jar mycomponent.car deploy-to-nexus --url <nexus url> --repo <repository name> --user <username> --pass <password> --threads <parallel threads number> --dir <temp directory>
In this command, Nexus URL and repository name are mandatory arguments. All other arguments are optional. If arguments contain spaces or special symbols, you need to quote the whole value of the argument. For example:
--pass "Y0u will \ not G4iess i' ^"
Deploying to the Studio
The deploy-in-studio
goal deploys the current component module into a local Talend Studio instance.
Name | Description | User property | Default |
---|---|---|---|
studioHome | Path to the Studio home directory. | | - |
studioM2 | Path to the Studio Maven repository, if not the default one. | | - |
You can use the following command from the root folder of your project:
$ mvn talend-component:deploy-in-studio -Dtalend.component.studioHome="<studio_path>"
Help
The help
goal displays help information on talend-component-maven-plugin
.
Call mvn talend-component:help -Ddetail=true -Dgoal=<goal-name>
to display the parameter details of a specific goal.
Name | Description | User property | Default |
---|---|---|---|
detail | Displays all settable properties for each goal. | | false |
goal | The name of the goal for which to show help. If unspecified, all goals are displayed. | | - |
indentSize | Number of spaces per indentation level. This integer should be positive. | | 2 |
lineLength | Maximum length of a display line. This integer should be positive. | | 80 |
Building components with Gradle
To develop new components, Talend Component Kit requires a build tool in which you will import the component project generated from the starter. With this build tool, you will also be able to implement the logic of your component and to install and deploy it to Talend applications. A Talend Component Kit plugin is available for each of the supported build tools.
gradle-talend-component helps you write components that match best practices. It is inspired by the Maven plugin and adds the ability to automatically generate the dependencies.txt file used by the SDK to build the component classpath. For more information on the configuration, refer to the Maven properties matching the attributes.
By default, Gradle does not log information messages. To see messages, use --info
in your commands. Refer to Gradle’s documentation to learn about log levels.
You can use it as follows:
buildscript {
repositories {
mavenLocal()
mavenCentral()
}
dependencies {
classpath "org.talend.sdk.component:gradle-talend-component:${talendComponentVersion}"
}
}
apply plugin: 'org.talend.sdk.component'
apply plugin: 'java'
// optional customization
talendComponentKit {
// dependencies.txt generation, replaces maven-dependency-plugin
dependenciesLocation = "TALEND-INF/dependencies.txt"
boolean skipDependenciesFile = false;
// classpath for validation utilities
sdkVersion = "${talendComponentVersion}"
apiVersion = "${talendComponentApiVersion}"
// documentation
skipDocumentation = false
documentationOutput = new File(....)
documentationLevel = 2 // first level will be == in the generated .adoc
documentationTitle = 'My Component Family' // defaults to ${project.name}
documentationAttributes = [:] // adoc attributes
documentationFormats = [:] // renderings to do
documentationVersion = 1.1 // defaults to the .pom version
// validation
skipValidation = false
validateFamily = true
validateSerializable = true
validateInternationalization = true
validateModel = true
validateOptionNames = true
validateMetadata = true
validateComponent = true
validateDataStore = true
validateDataSet = true
validateActions = true
validateLocalConfiguration = true
validateOutputConnection = true
validateLayout = true
validateDocumentation = true
// web
serverArguments = []
serverPort = 8080
// car
carAttach = true
carClassifier = 'component' // classifier to use if carAttach is set to true
carOutput = new File(....)
carMetadata = [:] // custom meta (string key-value pairs)
carPackaging = ${project.packaging}
// deploy-in-studio
studioHome = "C:\<pathToStudio>"
// svg2png
icons = 'resources/main/icons'
useIconWorkarounds = true
}
Wrapping a Beam I/O
Limitations
This part is limited to specific kinds of Beam PTransform
:
-
PTransform<PBegin, PCollection<?>> for inputs.
-
PTransform<PCollection<?>, PDone> for outputs. Outputs must use a single (composite or not) DoFn in their apply method.
Wrapping an input
To illustrate the input wrapping, this procedure uses the following input as a starting point (based on existing Beam inputs):
@AutoValue
public abstract [static] class Read extends PTransform<PBegin, PCollection<String>> {
// config
@Override
public PCollection<String> expand(final PBegin input) {
return input.apply(
org.apache.beam.sdk.io.Read.from(new BoundedElasticsearchSource(this, null)));
}
// ... other transform methods
}
To wrap the Read
in a framework component, create a transform delegating to that Read with at least a @PartitionMapper
annotation and using @Option
constructor injections to configure the component. Also make sure to follow the best practices and to specify @Icon
and @Version
.
@PartitionMapper(family = "myfamily", name = "myname")
public class WrapRead extends PTransform<PBegin, PCollection<String>> {
private PTransform<PBegin, PCollection<String>> delegate;
public WrapRead(@Option("dataset") final WrapReadDataSet dataset) {
delegate = TheIO.read().withConfiguration(this.createConfigurationFrom(dataset));
}
@Override
public PCollection<String> expand(final PBegin input) {
return delegate.expand(input);
}
// ... other methods like the mapping with the native configuration (createConfigurationFrom)
}
Wrapping an output
To illustrate the output wrapping, this procedure uses the following output as a starting point (based on existing Beam outputs):
@AutoValue
public abstract [static] class Write extends PTransform<PCollection<String>, PDone> {
// configuration withXXX(...)
@Override
public PDone expand(final PCollection<String> input) {
input.apply(ParDo.of(new WriteFn(this)));
return PDone.in(input.getPipeline());
}
// other methods of the transform
}
You can wrap this output exactly the same way you wrap an input, but using @Processor instead of @PartitionMapper:
@Processor(family = "myfamily", name = "myname")
public class WrapWrite extends PTransform<PCollection<String>, PDone> {
private PTransform<PCollection<String>, PDone> delegate;
public WrapWrite(@Option("dataset") final WrapWriteDataSet dataset) {
delegate = TheIO.write().withConfiguration(this.createConfigurationFrom(dataset));
}
@Override
public PDone expand(final PCollection<String> input) {
return delegate.expand(input);
}
// ... other methods like the mapping with the native configuration (createConfigurationFrom)
}
Tip
Note that the org.talend.sdk.component.runtime.beam.transform.DelegatingTransform
class fully delegates the "expansion" to another transform. Therefore, you can extend it and implement the configuration mapping:
@Processor(family = "beam", name = "file")
public class BeamFileOutput extends DelegatingTransform<PCollection<String>, PDone> {
public BeamFileOutput(@Option("output") final String output) {
super(TextIO.write()
.withSuffix("test")
.to(FileBasedSink.convertToFileResourceIfPossible(output)));
}
}
Advanced
In terms of classloading, when you write an I/O, the Beam SDK Java core stack is assumed to be provided by the Talend Component Kit runtime. This way, you don’t need to include it in the compile scope; it would be ignored anyway.
Coder
If you need a JSON coder, you can use the org.talend.sdk.component.runtime.beam.factory.service.PluginCoderFactory service, which gives you access to the JSON-P and JSON-B coders.
There is also an Avro coder, which uses the FileContainer. It is self-contained for IndexedRecord and, unlike the default Apache Beam AvroCoder, it does not require setting the schema when creating a pipeline. It consumes more space and is therefore slightly slower, but it is fine for DoFn, since it does not rely on serialization in most cases.
See org.talend.sdk.component.runtime.beam.transform.avro.IndexedRecordCoder
.
JsonObject to IndexedRecord
If your PCollection
is made of JsonObject
records, and you want to convert them to IndexedRecord
, you can use the following PTransforms
:
IndexedRecordToJson
-
converts an IndexedRecord to a JsonObject.
JsonToIndexedRecord
-
converts a JsonObject to an IndexedRecord.
SchemalessJsonToIndexedRecord
-
converts a JsonObject to an IndexedRecord with Avro schema inference.
Record coder
There are two main provided coders for Record:
FullSerializationRecordCoder
-
unwraps the record as an Avro IndexedRecord and serializes it with its schema. This can have a performance impact but, due to the structure of components, it does not generally affect runtime performance - except with the direct runner - because the runners optimize the pipeline accordingly.
SchemaRegistryCoder
-
serializes the Avro IndexedRecord as well, but ensures the schema is in the SchemaRegistry to be able to deserialize it when needed. This implementation is faster, but the default implementation of the registry is in-memory, so it only works with a single worker node. You can extend it using the Java SPI mechanism to plug in a custom distributed implementation.
Sample
Sample input based on Beam Kafka:
@Version
@Icon(Icon.IconType.KAFKA)
@Emitter(name = "Input")
@AllArgsConstructor
@Documentation("Kafka Input")
public class KafkaInput extends PTransform<PBegin, PCollection<Record>> { (1)
private final InputConfiguration configuration;
private final RecordBuilderFactory builder;
private final PluginCoderFactory coderFactory;
private KafkaIO.Read<byte[], byte[]> delegate() {
final KafkaIO.Read<byte[], byte[]> read = KafkaIO.<byte[], byte[]> read()
.withBootstrapServers(configuration.getBootstrapServers())
.withTopics(configuration.getTopics().stream().map(InputConfiguration.Topic::getName).collect(toList()))
.withKeyDeserializer(ByteArrayDeserializer.class).withValueDeserializer(ByteArrayDeserializer.class);
if (configuration.getMaxResults() > 0) {
return read.withMaxNumRecords(configuration.getMaxResults());
}
return read;
}
@Override (2)
public PCollection<Record> expand(final PBegin pBegin) {
final PCollection<KafkaRecord<byte[], byte[]>> kafkaEntries = pBegin.getPipeline().apply(delegate());
return kafkaEntries.apply(ParDo.of(new BytesToRecord(builder))).setCoder(SchemaRegistryCoder.of()); (3)
}
@AllArgsConstructor
private static class BytesToRecord extends DoFn<KafkaRecord<byte[], byte[]>, Record> {
private final RecordBuilderFactory builder;
@ProcessElement
public void onElement(final ProcessContext context) {
context.output(toRecord(context.element()));
}
private Record toRecord(final KafkaRecord<byte[], byte[]> element) {
return builder.newRecordBuilder().add("key", element.getKV().getKey())
.add("value", element.getKV().getValue()).build();
}
}
}
1 | The PTransform generics define that the component is an input (PBegin marker). |
2 | The expand method chains the native I/O with a custom mapper (BytesToRecord ). |
3 | The mapper uses the SchemaRegistry coder automatically created from the contextual component. |
Because the Beam wrapper does not respect the standard Talend Component Kit programming model (for example, there is no @Emitter), you need to set the <talend.validation.component>false</talend.validation.component> property in your pom.xml file (or equivalent for Gradle) to skip the component programming model validations of the framework.
Talend Component Kit best practices
Organizing your code
Some recommendations apply to the way component packages are organized:
-
Make sure to create a
package-info.java
file with the component family/categories at the root of your component package:
@Components(family = "jdbc", categories = "Database")
package org.talend.sdk.component.jdbc;
import org.talend.sdk.component.api.component.Components;
-
Create a package for the configuration.
-
Create a package for the actions.
-
Create a package for the component and one sub-package by type of component (input, output, processors, and so on).
Configuring components
Serializing your configuration
It is recommended to serialize your configuration in order to be able to pass it through other components.
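A minimal sketch of what this means in practice (the class and its option are illustrative):
import java.io.Serializable;
import org.talend.sdk.component.api.configuration.Option;

// implementing Serializable lets the configuration instance be passed across components
public class MyComponentConfiguration implements Serializable {
    @Option
    private String endpoint; // illustrative option
}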
Input and output components
When building a new component, the first step is to identify the way it must be configured.
The two main concepts are:
-
The DataStore which is the way you can access the backend.
-
The DataSet which is the way you interact with the backend.
For example:
Example description | DataStore | DataSet |
---|---|---|
Accessing a relational database like MySQL | JDBC driver, URL, username, password | Query to execute, row mapper, and so on. |
Accessing a file system | File pattern (or directory + file extension/prefix/…) | File format, buffer size, and so on. |
It is common for the dataset to include the datastore, because both are required to work. However, it is recommended to replace this pattern by defining both the dataset and the datastore in a higher-level configuration model. For example:
@DataSet
public class MyDataSet {
// ...
}
@DataStore
public class MyDataStore {
// ...
}
public class MyComponentConfiguration {
@Option
private MyDataSet dataset;
@Option
private MyDataStore datastore;
}
About actions
Input and output components are particular because they can be linked to a set of actions. It is recommended to wire all the actions you can apply to ensure the consumers of your component can provide a rich experience to their users.
The most common action is the health check, which exposes a way to ensure that the datastore/connection works.
Configuration example:
@DataStore
@Checkable
public class JdbcDataStore
implements Serializable {
@Option
private String driver;
@Option
private String url;
@Option
private String username;
@Option
private String password;
}
Action example:
@HealthCheck
public HealthCheckStatus healthCheck(@Option("datastore") JdbcDataStore datastore) {
if (!doTest(datastore)) {
// often add an exception message mapping or equivalent
return new HealthCheckStatus(Status.KO, "Test failed");
}
return new HealthCheckStatus(Status.OK, "Connection successful");
}
Processor components
Configuring processor components is simpler than configuring input and output components because the configuration is specific to each component. For example, a mapper takes the mapping between the input and output models:
public class MappingConfiguration {
@Option
private Map<String, String> fieldsMapping;
@Option
private boolean ignoreCase;
//...
}
Handling UI interactions
It is recommended to provide as much information as possible to let the UI work with the data while it is being edited.
Validations
Light validations
Light validations are all the validations you can execute on the client side. They are listed in the UI hint section.
Use light validations first before going with custom validations because they are more efficient.
Custom validations
Custom validations enforce custom code to be executed, but are heavier to execute.
Prefer using light validations when possible. |
Define an action with the parameters needed for the validation and link the option you want to validate to this action. For example, to validate a dataset for a JDBC driver:
// ...
public class JdbcDataStore
implements Serializable {
@Option
@Validable("driver")
private String driver;
// ...
}
@AsyncValidation("driver")
public ValidationResult validateDriver(@Option("value") String driver) {
if (findDriver(driver) != null) {
return new ValidationResult(Status.OK, "Driver found");
}
return new ValidationResult(Status.KO, "Driver not found");
}
You can also define a Validable class and use it to validate a form by setting it on your whole configuration:
// Note: some parts of the API were removed for clarity
public class MyConfiguration {
// a lot of @Options
}
public class MyComponent {
public MyComponent(@Validable("configuration") MyConfiguration config) {
// ...
}
//...
}
@AsyncValidation("configuration")
public ValidationResult validateConfiguration(@Option("value") MyConfiguration configuration) {
if (isValid(configuration)) {
return new ValidationResult(Status.OK, "Configuration valid");
}
return new ValidationResult(Status.KO, "Configuration not valid ${because ...}");
}
The parameter binding of the validation method uses the same logic as the component configuration injection. Therefore, the @Option annotation specifies the prefix to use to reference a parameter. It is recommended to use @Option("value") unless you know exactly why you don’t. This way, the consumer can match the configuration model and just prefix it with value. to send the instance to validate.
|
Validations are triggered based on events. If you mark part of a configuration as @Validable but this configuration is translated to a widget without any interaction, then no validation will happen. The rule of thumb is to mark only primitives and simple types (lists of primitives) as @Validable.
Completion
It can be handy and user-friendly to provide completion on some fields. For example, to define completion for available drivers:
// ...
public class JdbcDataStore
implements Serializable {
@Option
@Completable("driver")
private String driver;
// ...
}
@Completion("driver")
public CompletionList findDrivers() {
return new CompletionList(findDriverList());
}
Component representation
Each component must have its own icon:
@Icon(Icon.IconType.DB_INPUT)
@PartitionMapper(family = "jdbc", name = "input")
public class JdbcPartitionMapper
implements Serializable {
}
You can use talend.surge.sh/icons/ to find the icon you want to use. |
Enforcing versioning on components
It is recommended to enforce the version of your component, even though it is not mandatory for the first version.
@Version(1)
@PartitionMapper(family = "jdbc", name = "input")
public class JdbcPartitionMapper
implements Serializable {
}
If you break a configuration entry in a later version, make sure to:
-
Upgrade the version.
-
Support a migration of the configuration.
@Version(value = 2, migrationHandler = JdbcPartitionMapper.Migrations.class)
@PartitionMapper(family = "jdbc", name = "input")
public class JdbcPartitionMapper
implements Serializable {
public static class Migrations implements MigrationHandler {
// implement your migration
}
}
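The MigrationHandler contract receives the incoming version and the flattened configuration as a Map<String, String> and returns the migrated map. A minimal sketch, assuming a hypothetical rename of a location option to url in version 2:
import java.util.HashMap;
import java.util.Map;
import org.talend.sdk.component.api.component.MigrationHandler;

public static class Migrations implements MigrationHandler {
    @Override
    public Map<String, String> migrate(final int incomingVersion, final Map<String, String> incomingData) {
        final Map<String, String> migrated = new HashMap<>(incomingData);
        if (incomingVersion < 2) {
            // hypothetical example: the "location" option was renamed to "url" in version 2
            final String location = migrated.remove("configuration.location");
            if (location != null) {
                migrated.put("configuration.url", location);
            }
        }
        return migrated;
    }
}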
Component Loading
Talend Component scanning is based on plugins. To make sure that plugins can be developed in parallel and avoid conflicts, they need to be isolated (component or group of components in a single jar/plugin).
Multiple options are available:
-
Graph classloading: this option allows you to link the plugins and dependencies together dynamically in any direction. For example, graph classloading can be illustrated by OSGi containers.
-
Tree classloading: a shared classloader inherited by plugin classloaders. However, plugin classloader classes are not seen by the shared classloader, nor by other plugins. For example, tree classloading is commonly used by Servlet containers where plugins are web applications.
-
Flat classpath: listed for completeness but rejected by design because it doesn’t comply with the isolation requirement.
To avoid the complexity added by graph classloading, Talend Component Kit relies on tree classloading. The advantage is that you don’t need to define the relationship with other plugins/dependencies, because it is built-in.
Here is a representation of this solution:
The shared area contains Talend Component Kit API, which only contains by default the classes shared by the plugins.
Then, each plugin is loaded with its own classloader and dependencies.
Packaging a plugin
This section explains the overall way to handle dependencies but the Talend Maven plugin provides a shortcut for that. |
A plugin is a JAR file that was enriched with the list of its dependencies. By default, Talend Component Kit runtime is able to read the output of maven-dependency-plugin
in TALEND-INF/dependencies.txt
. You just need to make sure that your component defines the following plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>3.0.2</version>
<executions>
<execution>
<id>create-TALEND-INF/dependencies.txt</id>
<phase>process-resources</phase>
<goals>
<goal>list</goal>
</goals>
<configuration>
<outputFile>${project.build.outputDirectory}/TALEND-INF/dependencies.txt</outputFile>
</configuration>
</execution>
</executions>
</plugin>
Once built, check the JAR file and look for the following lines:
$ unzip -p target/mycomponent-1.0.0-SNAPSHOT.jar TALEND-INF/dependencies.txt
The following files have been resolved:
org.talend.sdk.component:component-api:jar:1.0.0-SNAPSHOT:provided
org.apache.geronimo.specs:geronimo-annotation_1.3_spec:jar:1.0:provided
org.superbiz:awesome-project:jar:1.2.3:compile
junit:junit:jar:4.12:test
org.hamcrest:hamcrest-core:jar:1.3:test
What is important to see is the scope related to the artifacts:
-
The APIs (component-api and geronimo-annotation_1.3_spec) are provided because you can consider them to be there when executing (they come with the framework).
-
Your specific dependencies (awesome-project in the example above) are marked as compile: they are included as needed dependencies by the framework (note that using runtime works too).
-
The other dependencies are ignored, for example test dependencies.
Packaging an application
Even if a flat classpath deployment is possible, it is not recommended because it would then reduce the capabilities of the components.
Dependencies
The way the framework resolves dependencies is based on a local Maven repository layout. As a quick reminder, it looks like:
.
├── groupId1
│   └── artifactId1
│       ├── version1
│       │   └── artifactId1-version1.jar
│       └── version2
│           └── artifactId1-version2.jar
└── groupId2
    └── artifactId2
        └── version1
            └── artifactId2-version1.jar
That is the only layout the framework uses. The logic converts the tuple {groupId, artifactId, version, type (jar)} to the path in the repository.
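As an illustration only, a hypothetical helper implementing that conversion could look like:
// illustrative mapping from Maven coordinates to the repository path
static String toRepositoryPath(final String groupId, final String artifactId,
                               final String version, final String type) {
    return groupId.replace('.', '/') + '/' + artifactId + '/' + version + '/'
            + artifactId + '-' + version + '.' + type;
}
// toRepositoryPath("org.superbiz", "awesome-project", "1.2.3", "jar")
// returns "org/superbiz/awesome-project/1.2.3/awesome-project-1.2.3.jar"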
Talend Component Kit runtime has two ways to find an artifact:
-
From the file system based on a configured Maven 2 repository.
-
From a fat JAR (uber JAR) with a nested Maven repository under
MAVEN-INF/repository
.
The first option uses either ${user.home}/.m2/repository
(default) or a specific path configured when creating a ComponentManager
.
The nested repository option needs some configuration during the packaging to ensure the repository is correctly created.
Creating a nested Maven repository with maven-shade-plugin
To create the nested MAVEN-INF/repository
repository, you can use the nested-maven-repository
extension:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer implementation="org.talend.sdk.component.container.maven.shade.ContainerDependenciesTransformer">
<session>${session}</session>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>nested-maven-repository</artifactId>
<version>${the.plugin.version}</version>
</dependency>
</dependencies>
</plugin>
Listing needed plugins
Plugins are usually programmatically registered. If you want to make some of them automatically available, you need to generate a TALEND-INF/plugins.properties
file that maps a plugin name to coordinates found with the Maven mechanism described above.
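For illustration, assuming the plain name-to-Maven-coordinates mapping described above, such a file could look like the following sketch (the plugin name and coordinates are placeholders):
# TALEND-INF/plugins.properties (illustrative content)
sample-component = org.talend.sdk.component:sample-component:1.0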
You can enrich maven-shade-plugin
to do it:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer implementation="org.talend.sdk.component.container.maven.shade.PluginTransformer">
<session>${session}</session>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>nested-maven-repository</artifactId>
<version>${the.plugin.version}</version>
</dependency>
</dependencies>
</plugin>
maven-shade-plugin extensions
Here is a final job/application bundle based on maven-shade-plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.1</version>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<shadedClassifierName>shaded</shadedClassifierName>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
<transformer
implementation="org.talend.sdk.component.container.maven.shade.ContainerDependenciesTransformer">
<session>${session}</session>
<userArtifacts>
<artifact>
<groupId>org.talend.sdk.component</groupId>
<artifactId>sample-component</artifactId>
<version>1.0</version>
<type>jar</type>
</artifact>
</userArtifacts>
</transformer>
<transformer implementation="org.talend.sdk.component.container.maven.shade.PluginTransformer">
<session>${session}</session>
<userArtifacts>
<artifact>
<groupId>org.talend.sdk.component</groupId>
<artifactId>sample-component</artifactId>
<version>1.0</version>
<type>jar</type>
</artifact>
</userArtifacts>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>nested-maven-repository</artifactId>
<version>${the.version}</version>
</dependency>
</dependencies>
</plugin>
The configuration unrelated to transformers depends on your application. |
ContainerDependenciesTransformer
embeds a Maven repository and PluginTransformer
to create a file that lists (one per line) artifacts (representing plugins).
Both transformers share most of their configuration:
-
session
: must be set to${session}
. This is used to retrieve dependencies. -
scope
: a comma-separated list of scopes to include in the artifact filtering (note that the default will rely onprovided
but you can replace it bycompile
,runtime
,runtime+compile
,runtime+system
ortest
). -
include
: a comma-separated list of artifacts to include in the artifact filtering. -
exclude
: a comma-separated list of artifacts to exclude in the artifact filtering. -
userArtifacts
: set of artifacts to include (groupId, artifactId, version, type - optional, file - optional for the plugin transformer, scope - optional) which can be forced inline. This parameter is mainly useful for PluginTransformer. -
includeTransitiveDependencies
: should transitive dependencies of the components be included. Set totrue
by default. It is active foruserArtifacts
. -
includeProjectComponentDependencies
: should component project dependencies be included. Set tofalse
by default. It is not needed when a job project uses isolation for components.
With the component tooling, it is recommended to keep default locations. Also, if you need to use project dependencies, you may need to refactor your project structure to ensure component isolation. Talend Component Kit lets you handle that part, but the recommended practice is to use userArtifacts for the components instead of project <dependencies>.
|
ContainerDependenciesTransformer
ContainerDependenciesTransformer
specific configuration is as follows:
-
repositoryBase
: base repository location (MAVEN-INF/repository
by default). -
ignoredPaths
: a comma-separated list of folders not to create in the output JAR. This is common for folders already created by other transformers/build parts.
PluginTransformer
PluginTransformer specific configuration is as follows:
-
pluginListResource: the location of the generated plugin list resource (TALEND-INF/plugins.properties by default).
For example, if you want to list only the plugins you use, you can configure this transformer as follows:
<transformer implementation="org.talend.sdk.component.container.maven.shade.PluginTransformer">
<session>${session}</session>
<include>org.talend.sdk.component:component-x,org.talend.sdk.component:component-y,org.talend.sdk.component:component-z</include>
</transformer>
Component scanning rules and default exclusions
The framework uses two kinds of filtering when scanning your component: one based on the JAR content - the presence of TALEND-INF/dependencies.txt - and one based on the package name.
Make sure that your component definitions (including services) are in a scanned module if they are not registered manually using ComponentManager.instance().addPlugin()
, and that the component package is not excluded.
Package Scanning
Since the framework can be used with fat JARs or shades, and because it still uses scanning, it is important to ensure that not all classes are scanned, for performance reasons.
Therefore, the following packages are ignored:
-
avro.shaded
-
com.codehale.metrics
-
com.ctc.wstx
-
com.datastax.driver
-
com.fasterxml.jackson
-
com.google.common
-
com.google.thirdparty
-
com.ibm.wsdl
-
com.jcraft.jsch
-
com.kenai
-
com.sun.istack
-
com.sun.xml
-
com.talend.shaded
-
com.thoughtworks
-
io.jsonwebtoken
-
io.netty
-
io.swagger
-
javax
-
jnr
-
junit
-
net.sf.ehcache
-
net.shibboleth
-
org.aeonbits.owner
-
org.apache
-
org.bouncycastle
-
org.codehaus
-
org.cryptacular
-
org.eclipse
-
org.fusesource
-
org.h2
-
org.hamcrest
-
org.hsqldb
-
org.jasypt
-
org.jboss
-
org.joda
-
org.jose4j
-
org.junit
-
org.jvnet
-
org.metatype
-
org.objectweb
-
org.openejb
-
org.opensaml
-
org.slf4j
-
org.swizzle
-
org.terracotta
-
org.tukaani
-
org.yaml
-
serp
It is not recommended but possible to add a TALEND-INF/scanning.properties file to your plugin module, with classloader.includes and classloader.excludes entries to refine the scanning with custom rules. In such a case, exclusions win over inclusions.
|
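For illustration, such a file could look like the following sketch (package names are placeholders; the exact pattern semantics depend on the framework version):
# TALEND-INF/scanning.properties (illustrative content)
classloader.includes = com.mycompany.components
classloader.excludes = com.mycompany.components.internal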
Testing components
Developing new components includes testing them in the required execution environments. Use the following articles to learn about the best practices and the available options to fully test your components.
Testing best practices
This section mainly concerns tools that can be used with JUnit. You can use most of these best practices with TestNG as well.
Parameterized tests
Parameterized tests are a great solution to repeat the same test multiple times. This method of testing requires defining a test scenario (I test function F
) and making the input/output data dynamic.
JUnit 4
Here is a test example, which validates a connection URI using ConnectionService
:
public class MyConnectionURITest {
@Test
public void checkMySQL() {
assertTrue(new ConnectionService().isValid("jdbc:mysql://localhost:3306/mysql"));
}
@Test
public void checkOracle() {
assertTrue(new ConnectionService().isValid("jdbc:oracle:thin:@//myhost:1521/oracle"));
}
}
The testing method is always the same. Only values are changing. It can therefore be rewritten using JUnit Parameterized
runner, as follows:
@RunWith(Parameterized.class) (1)
public class MyConnectionURITest {
@Parameterized.Parameters(name = "{0}") (2)
public static Iterable<String> uris() { (3)
return asList(
"jdbc:mysql://localhost:3306/mysql",
"jdbc:oracle:thin:@//myhost:1521/oracle");
}
@Parameterized.Parameter (4)
public String uri;
@Test
public void isValid() { (5)
assertNotNull(uri);
}
}
1 | Parameterized is the runner that understands @Parameters and how to use it. If needed, you can generate random data here. |
2 | By default the name of the executed test is the index of the data. Here, it is customized using the first toString() parameter value to have something more readable. |
3 | The @Parameters method must be static and return an array or iterable of the data used by the tests. |
4 | You can then inject the current data using the @Parameter annotation. It can take a parameter if you use an array of arrays instead of an iterable of objects in @Parameterized. You can select which item you want to inject. |
5 | The @Test method is executed using the contextual data. In this sample, it gets executed twice with the two specified URIs. |
You don’t have to define a single @Test method. If you define multiple methods, each of them is executed with all the data. For example, if another test is added to the previous example, four tests are executed (two per data set).
|
JUnit 5
With JUnit 5, parameterized tests are easier to use. The full documentation is available at junit.org/junit5/docs/current/user-guide/#writing-tests-parameterized-tests.
The main difference with JUnit 4 is that you can also define inline that the test method is a parameterized test as well as the values to use:
@ParameterizedTest
@ValueSource(strings = { "racecar", "radar", "able was I ere I saw elba" })
void mytest(String currentValue) {
// do test
}
However, you can still use the previous behavior with a method binding configuration:
@ParameterizedTest
@MethodSource("stringProvider")
void mytest(String currentValue) {
// do test
}
static Stream<String> stringProvider() {
return Stream.of("foo", "bar");
}
This last option allows you to inject any type of value - not only primitives - which is common when defining scenarios.
Add the junit-jupiter-params dependency to benefit from this feature.
|
component-runtime-testing
component-runtime-junit
component-runtime-junit
is a test library that allows you to validate simple logic based on the Talend Component Kit tooling.
To import it, add the following dependency to your project:
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>component-runtime-junit</artifactId>
<version>${talend-component.version}</version>
<scope>test</scope>
</dependency>
This dependency also provides mocked components that you can use with your own component to create tests.
The mocked components are provided under the test
family:
-
emitter
: a mock of an input component -
collector
: a mock of an output component
The collector is "per thread" by default. If you are executing a Beam (or concurrent) job, it will not work.
To switch to JVM-wide storage, set the talend.component.junit.handler.state system property to static (the default being thread).
You can do it in a maven-surefire-plugin execution.
|
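For example, a minimal maven-surefire-plugin configuration setting that system property could be:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- switch the mock collector storage from per-thread to JVM-wide -->
      <talend.component.junit.handler.state>static</talend.component.junit.handler.state>
    </systemPropertyVariables>
  </configuration>
</plugin>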
JUnit 4
You can define a standard JUnit test and use the SimpleComponentRule
rule:
public class MyComponentTest {
@Rule (1)
public final SimpleComponentRule components = new SimpleComponentRule("org.talend.sdk.component.mycomponent");
@Test
public void produce() {
Job.components() (2)
.component("mycomponent","yourcomponentfamily://yourcomponent?"+createComponentConfig())
.component("collector", "test://collector")
.connections()
.from("mycomponent").to("collector")
.build()
.run();
final List<MyRecord> records = components.getCollectedData(MyRecord.class); (3)
doAssertRecords(records); // depending your test
}
}
1 | The rule creates a component manager and provides two mock components: an emitter and a collector. Set the root package of your component to enable it. |
2 | Define any chain that you want to test. It generally uses the mock as source or collector. |
3 | Validate your component behavior. For a source, you can assert that the right records were emitted in the mock collect. |
The rule can also be defined as a @ClassRule to start it once per class and not per test as with @Rule .
|
To go further, you can add the ServiceInjectionRule
rule, which allows to inject all the component family services into the test class by marking test class fields with @Service
:
public class SimpleComponentRuleTest {
@ClassRule
public static final SimpleComponentRule COMPONENT_FACTORY = new SimpleComponentRule("...");
@Rule (1)
public final ServiceInjectionRule injections = new ServiceInjectionRule(COMPONENT_FACTORY, this); (2)
@Service (3)
private LocalConfiguration configuration;
@Service
private Jsonb jsonb;
@Test
public void test() {
// ...
}
}
1 | The injection requires the test instance, so it must be a @Rule rather than a @ClassRule . |
2 | The ComponentsController is passed to the rule, which for JUnit 4 is the SimpleComponentRule , as well as the test instance to inject services in. |
3 | All service fields are marked with @Service to let the rule inject them before the test is run. |
JUnit 5
The JUnit 5 integration is very similar to JUnit 4, except that it uses the JUnit 5 extension mechanism.
The entry point is the @WithComponents
annotation that you add to your test class, and which takes the component package you want to test. You can use @Injected
to inject an instance of ComponentsHandler
- which exposes the same utilities as the JUnit 4 rule - into a test class field:
@WithComponents("org.talend.sdk.component.junit.component") (1)
public class ComponentExtensionTest {
@Injected (2)
private ComponentsHandler handler;
@Test
public void manualMapper() {
final Mapper mapper = handler.createMapper(Source.class, new Source.Config() {
{
values = asList("a", "b");
}
});
assertFalse(mapper.isStream());
final Input input = mapper.create();
assertEquals("a", input.next());
assertEquals("b", input.next());
assertNull(input.next());
}
}
1 | The annotation defines which components to register in the test context. |
2 | The field allows to get the handler to be able to orchestrate the tests. |
If you use JUnit 5 for the first time, keep in mind that the imports changed and that you need to use org.junit.jupiter.api.Test instead of org.junit.Test .
Some IDE versions and surefire versions can also require you to install either a plugin or a specific configuration.
|
As for JUnit 4, you can go further by injecting test class fields marked with @Service
, but there is no additional extension to specify in this case:
@WithComponents("...")
class ComponentExtensionTest {
@Service (1)
private LocalConfiguration configuration;
@Service
private Jsonb jsonb;
@Test
void test() {
// ...
}
}
1 | All service fields are marked with @Service to let the extension inject them before the test is run. |
Streaming components
By design, streaming components never stop, which is an issue for tests. The Job DSL exposes two properties to help with that:
-
streaming.maxRecords: lets you request a maximum number of records.
-
streaming.maxDurationMs: lets you request a maximum duration for the execution of the input.
You can set them as properties on the job:
job.property("streaming.maxRecords", 5);
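The duration limit works the same way; for example (the value is illustrative):
job.property("streaming.maxDurationMs", 30000);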
Mocking the output
Using the test://collector
component as shown in the previous sample stores all records emitted by the chain (typically your source) in memory. You can then access them using the SimpleComponentRule.getCollectedData(type) method.
Note that this method filters by type. If you don’t need any specific type, you can use Object.class
.
Mocking the input
The input mocking is symmetric to the output. In this case, you provide the data you want to inject:
public class MyComponentTest {
@Rule
public final SimpleComponentRule components = new SimpleComponentRule("org.talend.sdk.component.mycomponent");
@Test
public void produce() {
components.setInputData(asList(createData(), createData(), createData())); (1)
Job.components()
.component("emitter","test://emitter")
.component("out", "yourcomponentfamily://myoutput?"+createComponentConfig())
.connections()
.from("emitter").to("out")
.build()
.run();
assertMyOutputProcessedTheInputData();
}
}
1 | using setInputData , you prepare the execution(s) to have a fake input when using the "test"/"emitter" component. |
Creating runtime configuration from component configuration
The component configuration is a POJO (using @Option
on fields) and the runtime configuration (ExecutionChainBuilder
) uses a Map<String, String>
. To make the conversion easier, the JUnit integration provides a SimpleFactory.configurationByExample
utility to get this map instance from a configuration instance.
Example:
final MyComponentConfig componentConfig = new MyComponentConfig();
componentConfig.setUser("....");
// .. other inits
final Map<String, String> configuration = configurationByExample(componentConfig);
The same factory provides a fluent DSL to create the configuration by calling configurationByExample
without any parameter.
The advantage is to be able to convert an object as a Map<String, String>
or as a query string
in order to use it with the Job
DSL:
final String uri = "family://component?" +
configurationByExample().forInstance(componentConfig).configured().toQueryString();
It handles the encoding of the URI to ensure it is correctly done.
When writing tests for your components, you can force the maxBatchSize parameter value by setting it with the following syntax: $configuration.$maxBatchSize=10 .
|
Testing a Mapper
The SimpleComponentRule
also allows to test a mapper unitarily. You can get an instance from a configuration and execute this instance to collect the output.
Example:
public class MapperTest {
@ClassRule
public static final SimpleComponentRule COMPONENT_FACTORY = new SimpleComponentRule(
"org.company.talend.component");
@Test
public void mapper() {
final Mapper mapper = COMPONENT_FACTORY.createMapper(MyMapper.class, new Source.Config() {{
values = asList("a", "b");
}});
assertEquals(asList("a", "b"), COMPONENT_FACTORY.collectAsList(String.class, mapper));
}
}
Testing a Processor
As for a mapper, a processor can be tested in isolation. However, this can be more complex when there are multiple inputs or outputs.
Example:
public class ProcessorTest {
@ClassRule
public static final SimpleComponentRule COMPONENT_FACTORY = new SimpleComponentRule(
"org.company.talend.component");
@Test
public void processor() {
final Processor processor = COMPONENT_FACTORY.createProcessor(Transform.class, null);
final SimpleComponentRule.Outputs outputs = COMPONENT_FACTORY.collect(processor,
new JoinInputFactory().withInput("__default__", asList(new Transform.Record("a"), new Transform.Record("bb")))
.withInput("second", asList(new Transform.Record("1"), new Transform.Record("2")))
);
assertEquals(2, outputs.size());
assertEquals(asList(2, 3), outputs.get(Integer.class, "size"));
assertEquals(asList("a1", "bb2"), outputs.get(String.class, "value"));
}
}
The rule allows you to instantiate a Processor
from your code, and then to collect
the output from the inputs you pass in. There are two convenient implementations of the input factory:
-
MainInputFactory
for processors using only the default input. -
JoinInputFactory
with thewithInput(branch, data)
method for processors using multiple inputs. The first argument is the branch name and the second argument is the data used by the branch.
If needed, you can also implement your own input representation using org.talend.sdk.component.junit.ControllableInputFactory .
|
component-runtime-testing-spark
The following artifact allows you to test against a Spark cluster:
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>component-runtime-testing-spark</artifactId>
<version>${talend-component.version}</version>
<scope>test</scope>
</dependency>
JUnit 4
The testing relies on a JUnit TestRule
. It is recommended to use it as a @ClassRule
, to make sure that a single instance of a Spark cluster is built. You can also use it as a simple @Rule
, to create the Spark cluster instances per method instead of per test class.
The @ClassRule
takes the Spark and Scala versions to use as parameters. It then forks a master and N slaves.
Finally, the submit*
method allows you to send jobs either from the test classpath or from a shade if you run it as an integration test.
For example:
public class SparkClusterRuleTest {
@ClassRule
public static final SparkClusterRule SPARK = new SparkClusterRule("2.10", "1.6.3", 1);
@Test
public void classpathSubmit() throws IOException {
SPARK.submitClasspath(SubmittableMain.class, getMainArgs());
// wait for the test to pass
}
}
This testing methodology works with @Parameterized . You can submit several jobs with different arguments and even combine it with Beam TestPipeline if you make it transient .
|
JUnit 5
The integration of that Spark cluster logic with JUnit 5 is done using the @WithSpark
marker for the extension. Optionally, it allows you to inject—through @SparkInject
—the BaseSpark<?>
handler to access the Spark cluster meta information. For example, its host/port.
Example:
@WithSpark
class SparkExtensionTest {
@SparkInject
private BaseSpark<?> spark;
@Test
void classpathSubmit() throws IOException {
final File out = new File(jarLocation(SparkClusterRuleTest.class).getParentFile(), "classpathSubmitJunit5.out");
if (out.exists()) {
out.delete();
}
spark.submitClasspath(SparkClusterRuleTest.SubmittableMain.class, spark.getSparkMaster(), out.getAbsolutePath());
await().atMost(5, MINUTES).until(
() -> out.exists() ? Files.readAllLines(out.toPath()).stream().collect(joining("\n")).trim() : null,
equalTo("b -> 1\na -> 1"));
}
}
Checking the job execution status
Currently, SparkClusterRule does not allow you to know when a job execution is done, even by exposing and polling the web UI URL to check. The best solution at the moment is to make sure that the output of your job exists and contains the right value.
awaitility or any equivalent library can help you implement such logic:
<dependency>
<groupId>org.awaitility</groupId>
<artifactId>awaitility</artifactId>
<version>3.0.0</version>
<scope>test</scope>
</dependency>
To wait until a file exists and check that its content (for example) is the expected one, you can use the following logic:
await()
.atMost(5, MINUTES)
.until(
() -> out.exists() ? Files.readAllLines(out.toPath()).stream().collect(joining("\n")).trim() : null,
equalTo("the expected content of the file"));
component-runtime-http-junit
The HTTP JUnit module allows you to mock REST API very simply. The module coordinates are:
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>component-runtime-http-junit</artifactId>
<version>${talend-component.version}</version>
<scope>test</scope>
</dependency>
This module uses Apache Johnzon and Netty. If you have any conflict (in particular with Netty), you can add the shaded classifier to the dependency. This way, both dependencies are shaded, which avoids conflicts with your component.
|
It supports both JUnit 4 and JUnit 5. The concept is the exact same one: the extension/rule is able to serve precomputed responses saved in the classpath.
You can plug your own ResponseLocator
to map a request to a response, but the default implementation - which should be sufficient in most cases - looks in talend/testing/http/<class name>_<method name>.json
. Note that you can also put it in talend/testing/http/<request path>.json
.
JUnit 4
JUnit 4 setup is done through two rules:
-
JUnit4HttpApi
, which starts the server. -
JUnit4HttpApiPerMethodConfigurator
, which configures the server per test and also handles the capture mode.
If you don’t use the JUnit4HttpApiPerMethodConfigurator , the capture feature is disabled and the per test mocking is not available.
|
public class MyRESTApiTest {
@ClassRule
public static final JUnit4HttpApi API = new JUnit4HttpApi();
@Rule
public final JUnit4HttpApiPerMethodConfigurator configurator = new JUnit4HttpApiPerMethodConfigurator(API);
@Test
public void direct() throws Exception {
// ... do your requests
}
}
SSL
For tests using SSL-based services, you need to use activeSsl()
on the JUnit4HttpApi
rule.
You can access the client SSL socket factory through the API handler:
@ClassRule
public static final JUnit4HttpApi API = new JUnit4HttpApi().activeSsl();
@Test
public void test() throws Exception {
final HttpsURLConnection connection = getHttpsConnection();
connection.setSSLSocketFactory(API.getSslContext().getSocketFactory());
// ....
}
JUnit 5
JUnit 5 uses a JUnit 5 extension based on the HttpApi
annotation that you can add to your test class. You can inject the test handler - which has some utilities for advanced cases - through @HttpApiInject
:
@HttpApi
class JUnit5HttpApiTest {
@HttpApiInject
private HttpApiHandler<?> handler;
@Test
void getProxy() throws Exception {
// .... do your requests
}
}
The injection is optional and the @HttpApi annotation allows you to configure several test behaviors.
|
SSL
For tests using SSL-based services, you need to use @HttpApi(useSsl = true)
.
You can access the client SSL socket factory through the API handler:
@HttpApi(useSsl = true)
class MyHttpsApiTest {
@HttpApiInject
private HttpApiHandler<?> handler;
@Test
void test() throws Exception {
final HttpsURLConnection connection = getHttpsConnection();
connection.setSSLSocketFactory(handler.getSslContext().getSocketFactory());
// ....
}
}
Capturing mode
The strength of this implementation is to run a small proxy server and to auto-configure the JVM:
http[s].proxyHost
, http[s].proxyPort
, HttpsURLConnection#defaultSSLSocketFactory
and SSLContext#default
are auto-configured to work out-of-the-box with the proxy.
It allows you to keep the native and real URLs in your tests. For example, the following test is valid:
public class GoogleTest {
@ClassRule
public static final JUnit4HttpApi API = new JUnit4HttpApi();
@Rule
public final JUnit4HttpApiPerMethodConfigurator configurator = new JUnit4HttpApiPerMethodConfigurator(API);
@Test
public void google() throws Exception {
assertEquals(HttpURLConnection.HTTP_OK, get("https://google.fr?q=Talend"));
}
private int get(final String uri) throws Exception {
// do the GET request, skipped for brevity
}
}
If you execute this test, it fails with an HTTP 400 error because the proxy does not find the mocked response.
You can create it manually, as described in component-runtime-http-junit, but you can also set the talend.junit.http.capture
property to the folder storing the captures. It must be the root folder and not the folder where the JSON files are located (not prefixed by talend/testing/http
by default).
In most cases, use src/test/resources
. If new File("src/test/resources")
resolves the valid folder when executing your test (Maven default), then you can just set the system property to true
. Otherwise, you need to adjust accordingly the system property value.
Note that setting the property to false does not disable the capture; captures are then saved in a false/ directory.
|
When the tests run with this system property, the testing framework creates the correct mock response files. After that, you can remove the system property. The tests will still pass, using google.com
, even if you disconnect your machine from the Internet.
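For example, assuming Maven defaults, a capture run can be triggered as follows:
mvn test -Dtalend.junit.http.capture=true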
Passthrough mode
If you set the talend.junit.http.passthrough
system property to true
, the server acts as a proxy and executes each request to the actual server - similarly to the capturing mode.
JUnit 5 and capture names
With its @ParameterizedTest, you may want to customize the name of the output file for JUnit 5 based captures/mocks.
Concretely, you want to ensure that replaying the same method with different data leads to different mock files.
By default, the framework uses the display name of the test to specialize the file name, but it is not always very friendly.
If you want more advanced control over the name, you can use @HttpApiName("myCapture.json") on the test method.
To parameterize the name using @HttpApiName, you can use the placeholders ${class} and ${method}, which represent the declaring class and method names, and ${displayName}, which represents the display name of the test.
Here is an example using the same capture file for all repeated tests:
@HttpApiName("${class}_${method}")
@RepeatedTest(5)
void run() throws Exception {
// ...
}
And here, the same example but using different files for each repetition:
@HttpApiName("${class}_${method}_${displayName}")
@RepeatedTest(5)
void run() throws Exception {
// ...
}
Beam testing
If you want to make sure that your component works in Beam and don’t want to use Spark, you can try with the Direct Runner.
Check beam.apache.org/contribute/testing/ for more details.
Testing on multiple environments
JUnit (4 or 5) already provides ways to parameterize tests and execute the same "test logic" against several sets of data. However, it is not very convenient for testing multiple environments.
For example, with Beam, you can test your code against multiple runners. But it requires resolving conflicts between runner dependencies, setting the correct classloaders, and so on.
To simplify such cases, the framework provides you a multi-environment support for your tests, through the JUnit module, which works with both JUnit 4 and JUnit 5.
JUnit 4
@RunWith(MultiEnvironmentsRunner.class)
@Environment(Env1.class)
@Environment(Env2.class)
public class TheTest {
@Test
public void test1() {
// ...
}
}
The MultiEnvironmentsRunner
executes the tests for each defined environments. With the example above, it means that it runs test1
for Env1
and Env2
.
By default, the JUnit4
runner is used to execute the tests in one environment, but you can use @DelegateRunWith
to use another runner.
JUnit 5
The multi-environment configuration with JUnit 5 is similar to JUnit 4:
@Environment(EnvironmentsExtensionTest.E1.class)
@Environment(EnvironmentsExtensionTest.E2.class)
class TheTest {
@EnvironmentalTest
void test1() {
// ...
}
}
The main differences are that no runner is used because they do not exist in JUnit 5, and that you need to replace @Test
by @EnvironmentalTest
.
With JUnit 5, tests are executed one after another for all environments, while with JUnit 4 tests are run sequentially in each environment. For example, this means that @BeforeAll and @AfterAll are executed once for all runners.
|
Provided environments
The provided environment sets the contextual classloader in order to load the related runner of Apache Beam.
Package: org.talend.sdk.component.junit.environment.builtin.beam
The configuration is read from system properties, environment variables, and so on. |
Name | Class |
---|---|
Contextual | ContextualEnvironment |
Direct | DirectRunnerEnvironment |
Flink | FlinkRunnerEnvironment |
Spark | SparkRunnerEnvironment |
Configuring environments
If the environment extends BaseEnvironmentProvider
and therefore defines an environment name - which is the case of the default ones - you can use EnvironmentConfiguration
to customize the system properties used for that environment:
@Environment(DirectRunnerEnvironment.class)
@EnvironmentConfiguration(
environment = "Direct",
systemProperties = @EnvironmentConfiguration.Property(key = "beamTestPipelineOptions", value = "..."))
@Environment(SparkRunnerEnvironment.class)
@EnvironmentConfiguration(
environment = "Spark",
systemProperties = @EnvironmentConfiguration.Property(key = "beamTestPipelineOptions", value = "..."))
@Environment(FlinkRunnerEnvironment.class)
@EnvironmentConfiguration(
environment = "Flink",
systemProperties = @EnvironmentConfiguration.Property(key = "beamTestPipelineOptions", value = "..."))
class MyBeamTest {
@EnvironmentalTest
void execute() {
// run some pipeline
}
}
If you set the <environment name>.skip system property to true , the environment-related executions are skipped.
|
Advanced usage
This usage assumes that Beam 2.4.0 or later is used.
The following dependencies bring the JUnit testing toolkit, the Beam integration and the multi-environment testing toolkit for JUnit into the test scope.
Dependencies:
<dependencies>
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>component-runtime-junit</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.shrinkwrap.resolver</groupId>
<artifactId>shrinkwrap-resolver-impl-maven</artifactId>
<version>3.1.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>component-runtime-beam</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
Using the fluent DSL to define jobs, you can write a test as follows:
Your job must be linear and each step must send a single value (no multi-input or multi-output). |
@Environment(ContextualEnvironment.class)
@Environment(DirectRunnerEnvironment.class)
class TheComponentTest {
@EnvironmentalTest
void testWithStandaloneAndBeamEnvironments() {
from("myfamily://in?config=xxxx")
.to("myfamily://out")
.create()
.execute();
// add asserts on the output if needed
}
}
It executes the chain twice:
-
With a standalone environment to simulate the Studio.
-
With a Beam (direct runner) environment to ensure the portability of your job.
Secrets/Passwords and Maven
You can reuse Maven settings.xml
server files, including the encrypted ones.
org.talend.sdk.component.maven.MavenDecrypter
allows you to find a username/password from a server identifier:
final MavenDecrypter decrypter = new MavenDecrypter();
final Server decrypted = decrypter.find("my-test-server");
// decrypted.getUsername();
// decrypted.getPassword();
It is very useful to avoid storing secrets and to perform tests on real systems on a continuous integration platform.
Even if you do not use Maven on the platform, you can generate the settings.xml and settings-security.xml files to use that feature. See maven.apache.org/guides/mini/guide-encryption.html for more details. |
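For reference, here is what a matching server entry can look like in settings.xml. The my-test-server identifier is the one passed to find in the example above; the credential values are placeholders:
<settings>
  <servers>
    <server>
      <id>my-test-server</id>
      <username>my-user</username>
      <password>{encrypted-password}</password>
    </server>
  </servers>
</settings>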
Generating data
Several data generators exist if you want to populate objects with a semantic that is more evolved than a plain random string like commons-lang3.
Even more advanced generators allow to directly bind generic data on a model; however, data quality is not always optimal.
There are two main kinds of implementation:
-
Implementations using a pattern and random generated data.
-
Implementations using a set of precomputed data extrapolated to create new values.
Check your use case to know which one fits best.
An alternative to data generation can be to import real data and use Talend Studio to sanitize the data, by removing sensitive information and replacing it with generated or anonymized data. Then you just need to inject that file into the system. |
If you are using JUnit 5, you can have a look at glytching.github.io/junit-extensions/randomBeans.
Creating a job pipeline
Job Builder
The Job builder lets you create a job pipeline programmatically using Talend components (Producers and Processors). The job pipeline is an acyclic graph, allowing you to build complex pipelines.
Let’s take a simple use case where two data sources (employee and salary) are formatted to CSV and the result is written to a file.
A job is defined based on components (nodes) and links (edges) to connect their branches together.
Every component is defined by a unique id and a URI that identifies the component.
The URI follows the form [family]://[component][?version][&configuration], where:
-
family is the name of the component family.
-
component is the name of the component.
-
version is the version of the component. It is represented in a key=value format. The key is __version and the value is a number.
-
configuration is the component configuration. It is represented in a key=value format. The key is the path of the configuration and the value is a string corresponding to the configuration value.
URI example:
job://csvFileGen?__version=1&path=/temp/result.csv&encoding=utf-8
Configuration parameters must be URI/URL encoded. |
Job example:
Job.components() (1)
.component("employee","db://input")
.component("salary", "db://input")
.component("concat", "transform://concat?separator=;")
.component("csv", "file://out?__version=2")
.connections() (2)
.from("employee").to("concat", "string1")
.from("salary").to("concat", "string2")
.from("concat").to("csv")
.build() (3)
.run(); (4)
1 | Defining all components used in the job pipeline. |
2 | Defining the connections between the components to construct the job pipeline. The links from/to use the component id and the default input/output branches. You can also connect a specific branch of a component, if it has multiple or named input/output branches, using the methods from(id, branchName) and to(id, branchName). In the example above, the concat component has two inputs ("string1" and "string2"). |
3 | Validating the job pipeline by asserting that it is correctly defined. |
4 | Running the job pipeline. |
In this version, the execution of the job is linear. Components are not executed in parallel even if some steps may be independent. |
Environment/Runner
Depending on the configuration, you can select the environment in which you execute your job.
To select the environment, the logic is the following:
-
If an org.talend.sdk.component.runtime.manager.chain.Job.ExecutorBuilder class is passed through the job properties, then use it. The supported types are an ExecutionBuilder instance, a Class or a String.
-
If an ExecutionBuilder SPI is present, use it. It is the case if component-runtime-beam is present in your classpath.
-
Else, use a local/standalone execution.
In the case of a Beam execution, you can customize the pipeline options using system properties. They have to be prefixed with talend.beam.job.
. For example, to set the appName
option, you need to use -Dtalend.beam.job.appName=mytest
.
Key Provider
The job builder lets you set a key provider to join your data when a component has multiple inputs. The key provider can be set contextually to a component or globally to the job.
Job.components()
.component("employee","db://input")
.property(GroupKeyProvider.class.getName(),
(GroupKeyProvider) context -> context.getData().getString("id")) (1)
.component("salary", "db://input")
.component("concat", "transform://concat?separator=;")
.connections()
.from("employee").to("concat", "string1")
.from("salary").to("concat", "string2")
.build()
.property(GroupKeyProvider.class.getName(), (2)
(GroupKeyProvider) context -> context.getData().getString("employee_id"))
.run();
1 | Defining a key provider for the data produced by the employee component. |
2 | Defining a key provider for all data manipulated in the job. |
If the incoming data has different IDs, you can provide a complex global key provider that relies on the context given by the component id
and the branch name
.
GroupKeyProvider keyProvider = context -> {
if ("employee".equals(context.getComponentId())) {
return context.getData().getString("id");
}
return context.getData().getString("employee_id");
};
Beam case
For the Beam case, you need to rely on the Beam pipeline definition and use the component-runtime-beam
dependency, which provides Beam bridges.
Inputs and Outputs
org.talend.sdk.component.runtime.beam.TalendIO provides a way to convert a partition mapper or a processor to a Beam input or output using the read or write methods.
public class Main {
    public static void main(final String[] args) {
        final ComponentManager manager = ComponentManager.instance();
        Pipeline pipeline = Pipeline.create();
        // Create a Beam input from the mapper and apply it to the pipeline
        pipeline.apply(TalendIO.read(manager.findMapper("sample", "reader", 1, new HashMap<String, String>() {{
                    put("fileprefix", "input");
                }}).get()))
                .apply(new ViewsMappingTransform(emptyMap(), "sample")) // prepare it for the output record format (see next part)
                // Create a Beam processor from the Talend processor and apply it to the pipeline
                .apply(TalendIO.write(manager.findProcessor("test", "writer", 1, new HashMap<String, String>() {{
                    put("fileprefix", "output");
                }}).get(), emptyMap()));
        // ... run the pipeline
    }
}
Processors
org.talend.sdk.component.runtime.beam.TalendFn
provides the way to wrap a processor in a Beam PTransform
and to integrate it into the pipeline.
public class Main {
public static void main(final String[] args) {
//Component manager and pipeline initialization...
//Create beam PTransform from processor and apply input to pipeline
pipeline.apply(TalendFn.asFn(manager.findProcessor("sample", "mapper", 1, emptyMap()).get()));
//... run pipeline
}
}
In the Beam case, the multiple inputs and outputs are represented by a Map element to avoid using Beam's native multiple inputs and outputs.
You can use ViewsMappingTransform or CoGroupByKeyResultMappingTransform to adapt the input/output format to the record format representing the multiple inputs/outputs, like Map<String, List<?>>, but materialized as a Record. Input data must be of the Record type in this case. |
Converting a Beam.io into a component I/O
For simple inputs and outputs, you can get an automatic and transparent conversion of the Beam.io into an I/O component, if you decorated your PTransform
with @PartitionMapper
or @Processor
.
However, there are limitations:
-
Inputs must implement
PTransform<PBegin, PCollection<?>>
and must be aBoundedSource
. -
Outputs must implement
PTransform<PCollection<?>, PDone>
and register aDoFn
on the inputPCollection
.
For more information, see the How to wrap a Beam I/O page.
Defining services
Services are configurations that can be reused across several classes. Talend Component Kit comes with a predefined set of services that you can easily use.
You can still define your own services under the service node of your component project. By default, the Component Kit Starter generates a dedicated class in your project in which you can implement services.
Built-in services
The framework provides built-in services that you can inject by type in components and actions.
List of built-in services
Type | Description |
---|---|
|
Provides a small abstraction to cache data that does not need to be recomputed very often. Commonly used by actions for UI interactions. |
|
Allows to resolve a dependency from its Maven coordinates. It can either try to resolve a local file or (better) create a preinitialized classloader for you. |
|
A JSON-B instance. If your model is static and you don’t want to handle the serialization manually using JSON-P, you can inject that instance. |
|
A JSON-P instance. Prefer other JSON-P instances if you don’t exactly know why you use this one. |
|
A JSON-P instance. It is recommended to use this one instead of a custom one to optimize memory usage and speed. |
|
A JSON-P instance. It is recommended to use this one instead of a custom one to optimize memory usage and speed. |
|
A JSON-P instance. It is recommended to use this one instead of a custom one to optimize memory usage and speed. |
|
A JSON-P instance. It is recommended to use this one instead of a custom one to optimize memory usage and speed. |
|
A JSON-P instance. It is recommended to use this one instead of a custom one to optimize memory usage and speed. |
|
Allows to resolve files from Maven coordinates (like |
|
Utility to inject services in fields marked with |
|
Allows to instantiate an object from its class name and properties. |
|
Allows to instantiate a record. |
|
Allows to instantiate a |
|
Some utilities to create records from another one. It is typically what is used when you want to add an entry in a record and passthrough the other ones. It also provides a nice |
|
Represents the local configuration that can be used during the design. It is not recommended to use it for the runtime because the local configuration is usually different and the instances are distinct. You can also use the local cache as an interceptor with |
Every interface that extends |
Lets you define an HTTP client in a declarative manner using an annotated interface. See the Using HttpClient for more details. |
All these injected services are serializable, which is important for big data environments. If you create the instances yourself, you cannot benefit from these features, nor from the memory optimization done by the runtime. Prefer reusing the framework instances over custom ones. |
LocalConfiguration
The local configuration uses system properties and the environment (replacing dots with underscores) to look up the values.
You can also put a TALEND-INF/local-configuration.properties
file with default values. This allows to use the local_configuration:<key>
syntax in @Ui
annotation. Here is an example to read the default value of a property from the configuration:
@Option
@DefaultValue("local_configuration:myfamily.model.key")
private String value;
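For illustration, the default value can then be provided by a TALEND-INF/local-configuration.properties file such as the following, where the value is purely illustrative:
myfamily.model.key = some default value
At runtime, a myfamily.model.key system property or a MYFAMILY_MODEL_KEY environment variable takes precedence, following the lookup rules described above.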
Ensure your key is unique across all components to avoid global overrides on the JVM. In practice, it is strongly recommended to always use the family as a prefix. Also note that you can use @Configuration("prefix") to inject a mapping of the LocalConfiguration in a component. It uses the same rules as for any configuration object.
If you prefer to inject your configuration in a service, make sure to wrap it in a Supplier to always have an up-to-date version. |
If you want to ignore the local-configuration.properties
, you can set the system property: talend.component.configuration.${componentPluginId}.ignoreLocalConfiguration=true
.
Here a sample @Configuration
model:
@Data // from lombok, optional
public class MyConfig {
@Option
private String defaultUrl;
}
Here is how to use it from a service:
@Service
public class ConfiguredService {
@Configuration("myprefix")
private Supplier<MyConfig> config;
}
And finally, here is how to use it in a component:
@Service
public class ConfiguredComponent {
public ConfiguredComponent(@Configuration("myprefix") final MyConfig config) {
// ...
}
}
It is recommended to convert this configuration into a runtime model in components to avoid transporting more than desired during the job distribution. |
Using HttpClient
You can access the API reference in the Javadocs.
The HttpClient usage is described in this section by using the REST API example below. Assuming that it requires a basic authentication header:
GET api/records/{id} | - |
POST api/records | JSON payload to be created |
To create an HTTP client that is able to consume the REST API above, you need to define an interface that extends HttpClient
.
The HttpClient
interface lets you set the base
for the HTTP address that the client will hit.
The base
is the part of the address that needs to be added to the request path to hit the API. It is now possible, and recommended, to use the @Base annotation.
Every method annotated with @Request
in the interface defines an HTTP request.
Every request can have a @Codec
parameter that allows to encode or decode the request/response payloads.
You can ignore the encoding/decoding for String and Void payloads. |
public interface APIClient extends HttpClient {
@Request(path = "api/records/{id}", method = "GET")
@Codec(decoder = RecordDecoder.class) //decoder = decode returned data to Record class
Record getRecord(@Header("Authorization") String basicAuth, @Path("id") int id);
/** same with base as parameter */
@Request(path = "api/records/{id}", method = "GET")
@Codec(decoder = RecordDecoder.class) //decoder = decode returned data to Record class
Record getRecord(@Header("Authorization") String basicAuth, @Base String base, @Path("id") int id);
@Request(path = "api/records", method = "POST")
@Codec(encoder = RecordEncoder.class, decoder = RecordDecoder.class) //encoder = encode record to fit request format (json in this example)
Record createRecord(@Header("Authorization") String basicAuth, Record record);
}
The interface should extend HttpClient. |
In the codec classes (that implement Encoder/Decoder), you can inject any of your services annotated with @Service
or @Internationalized
into the constructor.
Internationalization services can be useful to have internationalized messages for error handling.
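For illustration, here is a minimal sketch of what the RecordDecoder used above can look like, assuming a JSON payload and relying on the built-in Jsonb service (the Record type being the example model):
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;
import javax.json.bind.Jsonb;
import org.talend.sdk.component.api.service.http.Decoder;

public class RecordDecoder implements Decoder {
    // services annotated with @Service or @Internationalized can be injected here
    private final Jsonb jsonb;

    public RecordDecoder(final Jsonb jsonb) {
        this.jsonb = jsonb;
    }

    @Override
    public Object decode(final byte[] value, final Type expectedType) {
        // deserialize the raw payload into the expected model (Record in this example)
        return jsonb.fromJson(new String(value, StandardCharsets.UTF_8), expectedType);
    }
}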
The interface can be injected into component classes or services to consume the defined API.
@Service
public class MyService {
private APIClient client;
public MyService(...,APIClient client){
//...
this.client = client;
client.base("http://localhost:8080");// init the base of the api, often in a PostConstruct or init method
}
//...
// Our get request
Record rec = client.getRecord("Basic MLFKG?VKFJ", 100);
// or
Record rec1 = client.getRecord("Basic MLFKG?VKFJ", "http://localhost:8080", 100);
//...
// Our post request
Record newRecord = client.createRecord("Basic MLFKG?VKFJ", new Record());
}
By default, /+json are mapped to JSON-P and /+xml to JAX-B if the model has a @XmlRootElement annotation.
|
Customizing HTTP client requests
For advanced cases, you can customize the Connection
by directly using @UseConfigurer
on the method. It calls your custom instance of Configurer
. Note that you can use @ConfigurerOption
in the method signature to pass some Configurer
configurations.
For example, if you have the following Configurer
:
public class BasicConfigurer implements Configurer {
@Override
public void configure(final Connection connection, final ConfigurerConfiguration configuration) {
final String user = configuration.get("username", String.class);
final String pwd = configuration.get("password", String.class);
connection.withHeader(
    "Authorization",
    // prefix with the Basic scheme to form a valid basic authentication header
    "Basic " + Base64.getEncoder().encodeToString((user + ':' + pwd).getBytes(StandardCharsets.UTF_8)));
}
}
You can then set it on a method to automatically add the basic header with this kind of API usage:
public interface APIClient extends HttpClient {
@Request(path = "...")
@UseConfigurer(BasicConfigurer.class)
Record findRecord(@ConfigurerOption("username") String user, @ConfigurerOption("password") String pwd);
}
Built-In configurer
The framework provides in the component-api an OAuth1.Configurer which can be used as an example of configurer implementation. It expects a single OAuth1.Configuration parameter to be passed to the request as a @ConfigurerOption.
Here is a sample showing how it can be used:
public interface OAuth1Client extends HttpClient {
@Request(path = "/oauth1")
@UseConfigurer(OAuth1.Configurer.class)
String get(@ConfigurerOption("oauth1") final OAuth1.Configuration configuration);
}
Big data streams
By default, the client loads the payload in memory. In case of big payloads, this can consume too much memory.
For these cases, you can get the payload as an InputStream
:
public interface APIClient extends HttpClient {
@Request(path = "/big/http/data")
InputStream getData();
}
You can use the Response wrapper, or not. |
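If you also need the HTTP status or headers for such payloads, the method can return the payload wrapped in a Response; a minimal sketch assuming the same endpoint:
public interface APIClient extends HttpClient {
    @Request(path = "/big/http/data")
    Response<InputStream> getData(); // gives access to status(), headers() and body()
}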
Internationalizing services
Internationalization requires following several best practices:
-
Storing messages using
ResourceBundle
properties file in your component module. -
The location of the properties is in the same package as the related components and is named
Messages
. For example,org.talend.demo.MyComponent
usesorg.talend.demo.Messages[locale].properties
. -
Use the internationalization API for your own messages.
Internationalization API
The Internationalization API is the mechanism to use to internationalize your own messages in your own components.
The principle of the API is to design messages as methods returning String
values and get back a template using a ResourceBundle
named Messages
and located in the same package as the interface that defines these methods.
To ensure your internationalization API is identified, you need to mark it with the @Internationalized
annotation:
package org.superbiz;
@Internationalized (1)
public interface Translator {
String message();
String templatizedMessage(String arg0, int arg1); (2)
String localized(String arg0, @Language Locale locale); (3)
String localized(String arg0, @Language String locale); (4)
}
1 | @Internationalized allows to mark a class as an internationalized service. |
2 | You can pass parameters. The message uses the MessageFormat syntax to be resolved, based on the ResourceBundle template. |
3 | You can use @Language on a Locale parameter to specify manually the locale to use. Note that a single value is used (the first parameter tagged as such). |
4 | @Language also supports the String type. |
The corresponding Messages.properties
placed in the org/superbiz
resource folder contains the following:
org.superbiz.Translator.message = Some message
org.superbiz.Translator.templatizedMessage = Some message with string {0} and with number {1}
org.superbiz.Translator.localized = Some other message with string {0}
# or the short version
Translator.message = Some message
Translator.templatizedMessage = Some message with string {0} and with number {1}
Translator.localized = Some other message with string {0}
Providing actions for consumers
In some cases you may need to add actions that are not related to the runtime. For example, enabling users of the plugin/library to test if a connection works properly.
To do so, you need to define an @Action
, which is a method with a name (representing the event name), in a class decorated with @Service
:
@Service
public class MyDbTester {
@Action(family = "mycomp", value = "test")
public Status doTest(final IncomingData data) {
return ...;
}
}
Services are singletons. If you need some thread safety, make sure that they match that requirement. Services should not store any state either because they can be serialized at any time. State is held by the component. |
Services can be used in components as well (matched by type). They allow to reuse some shared logic, like a client. Here is a sample with a service used to access files:
@Emitter(family = "sample", name = "reader")
public class PersonReader implements Serializable {
// attributes skipped to be concise
public PersonReader(@Option("file") final File file,
final FileService service) {
this.file = file;
this.service = service;
}
// use the service
@PostConstruct
public void open() throws FileNotFoundException {
reader = service.createInput(file);
}
}
The service is automatically passed to the constructor. It can be used as a bean. In that case, it is only necessary to call the service method.
Particular action types
Some common actions need a clear contract so they are defined as API first-class citizen. For example, this is the case for wizards or health checks. Here is the list of the available actions:
Close Connection
Marks an action that closes a runtime connection, returning a close helper object that performs the actual close. This functionality is for the Studio only: the Studio uses the returned object to close an existing connection. It has no effect on the Cloud platform.
-
Type:
close_connection
-
API:
@org.talend.sdk.component.api.service.connection.CloseConnection
-
Returned type:
org.talend.sdk.component.api.service.connection.CloseConnectionObject
-
Sample:
{
"connection": "..."
}
Create Connection
Marks an action that creates a runtime connection, returning a runtime connection object, such as a JDBC connection for a database family. Its parameter MUST be a datastore, that is a configuration type annotated with @DataStore. This functionality is for the Studio only: the Studio uses the runtime connection object when reusing an existing connection. It has no effect on the Cloud platform.
-
Type:
create_connection
-
API:
@org.talend.sdk.component.api.service.connection.CreateConnection
Discoverdataset
This class marks an action that explores a connection to retrieve potential datasets.
-
Type:
discoverdataset
-
API:
@org.talend.sdk.component.api.service.discovery.DiscoverDataset
-
Returned type:
org.talend.sdk.component.api.service.discovery.DiscoverDatasetResult
-
Sample:
{
"datasetDescriptionList": "..."
}
Dynamic Values
Mark a method as being useful to fill potential values of a string option for a property denoted by its value. You can link a field as being completable using @Proposable(value). The resolution of the completion action is then done through the component family and value of the action. The callback doesn’t take any parameter.
-
Type:
dynamic_values
-
API:
@org.talend.sdk.component.api.service.completion.DynamicValues
-
Returned type:
org.talend.sdk.component.api.service.completion.Values
-
Sample:
{
"items":[
{
"id":"value",
"label":"label"
}
]
}
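A minimal sketch of the service side, assuming an illustrative action name matching a @Proposable("modes") option:
@Service
public class ModeService {
    @DynamicValues("modes")
    public Values findModes() {
        // the id is stored in the configuration, the label is displayed to the user
        return new Values(asList(new Values.Item("value", "label")));
    }
}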
Healthcheck
This class marks an action performing a connection test.
-
Type:
healthcheck
-
API:
@org.talend.sdk.component.api.service.healthcheck.HealthCheck
-
Returned type:
org.talend.sdk.component.api.service.healthcheck.HealthCheckStatus
-
Sample:
{
"comment":"Something went wrong",
"status":"KO"
}
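The service side can be sketched as follows, where MyDataStore and the connect call are illustrative placeholders:
@Service
public class ConnectionTester {
    @HealthCheck("test-connection")
    public HealthCheckStatus testConnection(@Option final MyDataStore datastore) {
        try {
            connect(datastore); // illustrative: try to reach the remote system
            return new HealthCheckStatus(HealthCheckStatus.Status.OK, "Connection successful");
        } catch (final Exception e) {
            return new HealthCheckStatus(HealthCheckStatus.Status.KO, "Something went wrong: " + e.getMessage());
        }
    }
}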
Schema
Mark an action as returning a discovered schema. Its parameter MUST be a dataset, that is a configuration type annotated with @DataSet. If the component has multiple datasets, the dataset used as the action parameter should have the same identifier as the @DiscoverSchema action.
-
Type:
schema
-
API:
@org.talend.sdk.component.api.service.schema.DiscoverSchema
-
Returned type:
org.talend.sdk.component.api.record.Schema
-
Sample:
{
"entries":[
{
"comment":"The column 1",
"metadata":false,
"name":"column1",
"nullable":false,
"props":{
},
"rawName":"column 1",
"type":"STRING"
},
{
"comment":"The int column",
"metadata":false,
"name":"column2",
"nullable":false,
"props":{
},
"rawName":"column 2",
"type":"INT"
}
],
"metadata":[
],
"props":{
},
"type":"RECORD"
}
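The corresponding service method can be sketched as follows, assuming MyDataSet is the dataset of the component and using the built-in RecordBuilderFactory:
@Service
public class SchemaDiscovery {
    @DiscoverSchema("MyDataSet")
    public Schema guessSchema(@Option final MyDataSet dataset, final RecordBuilderFactory factory) {
        // build a schema matching the entries of the JSON sample above (simplified)
        return factory.newSchemaBuilder(Schema.Type.RECORD)
                .withEntry(factory.newEntryBuilder().withName("column1").withType(Schema.Type.STRING).build())
                .withEntry(factory.newEntryBuilder().withName("column2").withType(Schema.Type.INT).build())
                .build();
    }
}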
Suggestions
Mark a method as being useful to fill potential values of a string option. You can link a field as being completable using @Suggestable(value). The resolution of the completion action is then done when the user requests it (generally by clicking on a button or entering the field, depending on the environment).
-
Type:
suggestions
-
API:
@org.talend.sdk.component.api.service.completion.Suggestions
-
Returned type:
org.talend.sdk.component.api.service.completion.SuggestionValues
-
Sample:
{
"cacheable":false,
"items":[
{
"id":"value",
"label":"label"
}
]
}
Update
This class marks an action returning a new instance replacing part of a form/configuration.
-
Type:
update
-
API:
@org.talend.sdk.component.api.service.update.Update
User
Extension point for custom UI integrations and custom actions.
-
Type:
user
-
API:
@org.talend.sdk.component.api.service.Action
Validation
Mark a method as being used to validate a configuration.
This is a server-side validation, so only use it if you cannot implement the validation on the client side. |
-
Type:
validation
-
API:
@org.talend.sdk.component.api.service.asyncvalidation.AsyncValidation
-
Returned type:
org.talend.sdk.component.api.service.asyncvalidation.ValidationResult
-
Sample:
{
"comment":"Something went wrong",
"status":"KO"
}
built_in_suggestable
Mark the decorated field as supporting suggestions, i.e. dynamically getting a list of valid values the user can use. It is however different from @Suggestable in that it looks up the implementation in the current application and not in the services. Note that it can do nothing in some environments and that there is no guarantee that the specified action is supported.
-
API:
@org.talend.sdk.component.api.configuration.action.BuiltInSuggestable
Internationalization
Internationalization is supported through the injection of the $lang
parameter, which allows you to get the correct locale to use with an @Internationalized
service:
public SuggestionValues findSuggestions(@Option("someParameter") final String param,
@Option("$lang") final String lang) {
return ...;
}
You can combine the $lang option with the @Internationalized and @Language parameters. |
Services and interceptors
For common concerns such as caching, auditing, and so on, you can use an interceptor-like API. It is enabled on services by the framework.
An interceptor defines an annotation marked with @Intercepts
, which defines the implementation of the interceptor (InterceptorHandler
).
For example:
@Intercepts(LoggingHandler.class)
@Target({ TYPE, METHOD })
@Retention(RUNTIME)
public @interface Logged {
String value();
}
The handler is created from its constructor and can take service injections (by type). The first parameter, however, can be BiFunction<Method, Object[], Object>
, which represents the invocation chain if your interceptor can be used with others.
If you make a generic interceptor, pass the invoker as first parameter. Otherwise you cannot combine interceptors at all. |
Here is an example of interceptor implementation for the @Logged
API:
public class LoggingHandler implements InterceptorHandler {
// injected
private final BiFunction<Method, Object[], Object> invoker;
private final SomeService service;
// internal
private final ConcurrentMap<Method, String> loggerNames = new ConcurrentHashMap<>();
public LoggingHandler(final BiFunction<Method, Object[], Object> invoker, final SomeService service) {
this.invoker = invoker;
this.service = service;
}
@Override
public Object invoke(final Method method, final Object[] args) {
final String name = loggerNames.computeIfAbsent(method, m -> findAnnotation(m, Logged.class).get().value());
service.getLogger(name).info("Invoking {}", method.getName());
return invoker.apply(method, args);
}
}
This implementation is compatible with interceptor chains because it takes the invoker as first constructor parameter and it also takes a service injection. Then, the implementation simply does what is needed, which is logging the invoked method in this case.
The findAnnotation method, inherited from InterceptorHandler, is a utility method to find an annotation on a method or a class (in this order). |
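Using the interceptor is then a matter of decorating a service (or one of its methods) with the annotation; for example, with the illustrative @Logged API defined above:
@Service
public class CampaignService {
    @Logged("campaigns") // the value becomes the logger name resolved by LoggingHandler
    public String findCampaign(final int id) {
        return "campaign-" + id;
    }
}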
Defining a custom API
It is possible to extend the Component API for custom front features.
What is important here is to keep in mind that you should do it only if it targets non-portable components (components only used by the Studio or Beam).
It is recommended to create a custom xxxx-component-api
module with the new set of annotations.
Integrating components into Talend Studio
To be able to see and use your newly developed components, you need to integrate them to the right application.
Currently, you can deploy your components to Talend Studio as part of your development process to iterate on them.
You can also share your components externally and install them using a component archive (.car) file.
Check the versions of the framework that are compatible with your version of Talend Studio in this document.
If you are used to creating custom components with the Javajet framework and want to get to know the new approach and the main differences of the Component Kit framework, refer to this document.
Version compatibility
You can integrate and start using components developed using Talend Component Kit in Talend applications very easily.
As both the development framework and Talend applications evolve over time, you need to ensure compatibility between the components you develop and the versions of Talend applications that you are targeting, by making sure that you use the right version of Talend Component Kit.
Compatibility matrix
The version of Talend Component Kit you need to use to develop new components depends on the versions of the Talend applications in which these components will be integrated.
Talend product | Talend Component Kit version |
---|---|
Talend Studio 7.3.1 | Framework until 1.1.15 |
Talend Studio 7.2.1 | Framework until 1.1.10 |
Talend Studio 7.1.1 | Framework until 1.1.1 |
Talend Studio 7.0.1 | Framework until 0.0.5 |
Talend Cloud | Framework from 1.1.x |
More recent versions of Talend Component Kit contain many fixes, improvements and features that help developing your components. However, they can cause some compatibility issues when deploying these components to older/different versions of Talend Studio and Talend Cloud. Choose the version of Talend Component Kit that best fits your needs.
Changing the Talend Component Kit version of your project
Creating a project using the Component Kit Starter always uses the latest release of Talend Component Kit.
However, you can manually change the version of Talend Component Kit directly in the generated project.
-
Go to your IDE and access the project root .pom file.
-
Look for the
org.talend.sdk.component
dependency nodes. -
Replace the version in the relevant nodes with the version that you need to use for your project.
You can use a snapshot of the version under development using the -SNAPSHOT version and the Sonatype snapshot repository. |
Iterating on component development with Talend Studio
Integrate components you developed using Talend Component Kit to Talend Studio in a few steps. Also learn how to enable the developer and debugging modes to iterate on your component development.
Version compatibility
The version of Talend Component Kit you need to use to develop new components depends on the version of Talend Studio in which components will be integrated.
Refer to this document to learn about compatibility between Talend Component Kit and the different versions of Talend applications.
Installing the components
Learn how to build and deploy components to Talend Studio using Maven or Gradle Talend Component Kit plugins.
This can be done using the deploy-in-studio
goal from your development environment.
If you are unfamiliar with component development, you can also follow this example to go through the entire process, from creating a project to using your new component in Talend Studio.
Configuring the component server
The Studio integration relies on the Component Server, which the Studio uses to gather data about components created using Talend Component Kit.
You can change the default configuration of component server by modifying the $STUDIO_HOME/configuration/config.ini
file.
The following parameters are available:
Name | Description | Default |
---|---|---|
component.environment | Enables the developer mode when set to dev | - |
component.debounce.timeout | Specifies the timeout (in milliseconds) before calling listeners in components Text fields | 750 |
component.kit.skip | If set to true, the Component Kit integration is not enabled | false |
component.java.arguments | Component server additional options | - |
component.java.m2 | Maven repository that the server uses to resolve components | Defaults to the global Studio configuration |
component.java.coordinates | A list of comma-separated GAV (groupId:artifactId:version) of components to register | - |
component.java.registry | A properties file with values matching component GAV (groupId:artifactId:version) registered at startup. Only use slashes (even on Windows) in the path. | - |
component.java.port | Sets the port to use for the server | random |
components.server.beam.active | Activates Beam support (experimental) if set to true. It requires Beam SDK Java core dependencies to be available. | false |
component.server.jul.forceConsole | Adds a console handler to JUL to see logs in the console. This can be helpful in development because the formatting is clearer than the OSGi one. It uses the java.util.logging.SimpleFormatter.format property to configure the message format. | false |
Here is an example of a common developer configuration/config.ini
file:
# use local .m2 instead of embedded studio one
maven.repository = global
# during development, see developer model part
component.environment = dev
# log into the console the component interactions - optional
component.server.jul.forceConsole = true
java.util.logging.SimpleFormatter.format = [%4$s] %5$s%6$s%n
Enabling the developer mode
The developer mode is especially useful to iterate on your component development and to avoid closing and restarting Talend Studio every time you make a change to a component. It adds a Talend Component Kit button in the main toolbar.
When clicking this button, all components developed with the Talend Component Kit framework are reloaded. The cache is invalidated and the components refreshed.
You still need to add and remove the components to see the changes. |
To enable it, simply set the component.environment
parameter to dev
in the config.ini
configuration file of the component server.
Debugging your custom component in Talend Studio
Several methods allow you to debug custom components created with Talend Component Kit in Talend Studio.
Debugging the runtime or the Guess schema option of a component
-
From your development tool, create a new Remote configuration, and copy the Command line arguments for running remote JVM field. For example,
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
, where:-
the suspend parameter of the -agentlib argument specifies whether you want to suspend the debugged JVM until the debugger attaches to it. Possible values are
n
(no, default value) ory
(yes). -
the address parameter of the -agentlib argument is the port used for the remote configuration. Make sure this port is available.
-
-
Open Talend Studio.
-
Create a new Job that uses the component you want to debug or open an existing one that already uses it.
-
Go to the Run tab of the Job and select Use specific JVM arguments.
-
Click New to add an argument.
-
In the popup window, paste the arguments copied from the IDE.
-
Enter the corresponding debug mode:
-
To debug the runtime, run the Job and access the remote host configured in the IDE.
-
To debug the Guess schema option, click the Guess schema action button of the component and access the remote host configured in the IDE.
-
Debugging UI actions and validations
-
From your development tool, create a new Remote configuration, and copy the Command line arguments for running remote JVM field. For example, -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005.
-
Access the installation directory of your Talend Studio.
-
Open the
.ini
file corresponding to your Operating System. For example,TOS_DI-win-x86_64.ini
. -
Paste the arguments copied from the IDE in a new line of the file.
-
Go to Talend Studio to use the component, and access the remote host configured in the IDE.
Random port when running concurrent studio instances
If you run multiple Studio instances automatically in parallel, for example on a CI platform, you can run into some issues with the random port computation. For that purpose, you can create the $HOME/.talend/locks/org.talend.sdk.component.studio-integration.lock
file.
Then, when a server starts, it acquires a lock on that file and prevents another server from getting a port until it has started. It ensures that two concurrent processes cannot get the same port allocated.
However, it is highly unlikely to happen on a desktop. In that case, forcing a different value through component.java.port
in your config.ini
file is a better solution for local installations.
Installing components using a CAR file
Components built using Talend Component Kit can be shared as component archives (.car). These CAR files are executable files allowing to easily deploy the components they contain to any compatible version of Talend Studio.
Component developers can generate .car files from their projects to share their components and make them available for other users, as detailed in this document.
This document assumes that you have a component archive (.car) file and need to deploy it to Talend Studio.
Deploying from the CAR file to Talend Studio
The component archive (.car) is executable and exposes the studio-deploy
command which takes a Talend Studio home path as parameter. When executed, it installs the dependencies into the Studio and registers the component in your instance. For example:
# for a studio
java -jar mycomponent.car studio-deploy /path/to/my/studio
or
java -jar mycomponent.car studio-deploy --location /path/to/my/studio
# for a m2 provisioning
java -jar mycomponent.car maven-deploy /path/to/.m2/repository
or
java -jar mycomponent.car maven-deploy --location /path/to/.m2/repository
You can also upload the dependencies to your Nexus server using the following command:
java -jar mycomponent.car deploy-to-nexus --url <nexus url> --repo <repository name> --user <username> --pass <password> --threads <parallel threads number> --dir <temp directory>
In this command, Nexus URL and repository name are mandatory arguments. All other arguments are optional. If arguments contain spaces or special symbols, you need to quote the whole value of the argument. For example:
--pass "Y0u will \ not G4iess i' ^"
Deploying a component archive to a remote project from Talend Studio
Talend Studio allows you to share components you have created using Talend Component Kit to other users working on the same remote project.
Remote projects are available with Enterprise versions of Talend Studio only. Also note that this feature has been removed from the Studio since the 7.3 release. |
Make sure you are connected to a remote project and the artifact repository for component sharing has been properly configured.
-
On the toolbar of the Studio main window, click the dedicated button or click File > Edit Project Properties from the menu bar to open the Project Settings dialog box.
-
In the tree view of the dialog box, select Repository Share to open the corresponding view.
-
Select the Propagate components update to Artifact Repository check box.
-
In the Repository ID field, specify the artifact repository configured for component sharing, and then click Check connection to verify the connectivity.
-
Click Apply and Close to validate the settings and close the dialog box.
-
Create a folder named
patches
at the root of your Talend Studio installation directory, then copy the .car files of the components you want to share to this folder. -
Restart your Talend Studio and connect to the remote project.
The components are deployed automatically to the repository and available in the Palette for other users when connected to a remote project with the same sharing repository configuration.
Troubleshooting
My custom component builds correctly but does not appear in Talend Studio, how to fix it? This issue can be caused by the icon specified in the component metadata.
-
Make sure to specify a custom icon for the component and the component family.
-
These custom icons must be in PNG format to be properly handled by Talend Studio.
-
Remove SVG parameters from the
talend.component.server.icon.paths
property in the HTTP server configuration. Refer to this section.
Learn more about defining custom icons for components in this document.
From Javajet to Talend Component Kit
Since version 7.0 of Talend Studio, Talend Component Kit is the recommended framework for developing components.
This framework is being introduced to ensure that newly developed components can be deployed and executed both in on-premise/local and cloud/big data environments.
From that new approach comes the need to provide a complete yet unique and compatible way of developing components.
With the Component Kit, custom components are entirely implemented in Java. To help you get started with a new custom component development project, a Starter is available. Using it, you will be able to generate the skeleton of your project. By importing this skeleton in a development tool, you can then implement the components layout and execution logic in Java.
Defining the component configuration
With the previous Javajet framework, metadata, widgets and configurable parts of a custom component were specified in XML.
With the Component Kit, they are now defined in the <component_name><component_type>Configuration
(for example, LoggerProcessorConfiguration
) Java class of your development project.
Note that most of this configuration is transparent if you specified the Configuration Model of your components right before generating the project from the Starter.
Any undocumented feature or option is considered not supported by the Component Kit framework. |
You can find examples of output in Studio or Cloud environments in the Gallery.
Widgets
Input/Text
Javajet
<PARAMETER
NAME="CONFIG"
FIELD="TEXT"
NUM_ROW="10">
<DEFAULT>""</DEFAULT>
</PARAMETER>
Component Kit
@Option
String config;
Password
Javajet
<PARAMETER
NAME="PASSWORD"
FIELD="PASSWORD"
NUM_ROW="10"
REQUIRED="true">
Component Kit
@Option
@Credential
String password;
Textarea
Javajet
<PARAMETER NAME="QUERY"
FIELD="MEMO"
NUM_ROW="1">
<DEFAULT>""</DEFAULT>
</PARAMETER>
Component Kit
@Option
@Textarea
String query;
Integer
Javajet
<!-- There were no specific widget for number fields -->
<PARAMETER
NAME="CONFIG"
FIELD="TEXT"
NUM_ROW="10">
<DEFAULT>""</DEFAULT>
</PARAMETER>
Component Kit
@Option
@Documentation("This is a number")
public Integer number;
Checkbox
Javajet
<PARAMETER
NAME="PRETTY_FORMAT"
FIELD="CHECK"
NUM_ROW="10">
<DEFAULT>false</DEFAULT>
</PARAMETER>
Component Kit
@Option
Boolean pretty_format;
List
Javajet
<PARAMETER
NAME="ACTION"
FIELD="CLOSED_LIST"
NUM_ROW="10">
<ITEMS DEFAULT="1">
<ITEM NAME="DELETE" VALUE="1" />
<ITEM NAME="INSERT" VALUE="2" />
<ITEM NAME="UPDATE" VALUE="3" />
</ITEMS>
</PARAMETER>
Component Kit
@Option
@Proposable("valuesProvider")
String action;
/** service class */
@DynamicValues("valuesProvider")
public Values actions(){
return new Values(asList(new Values.Item("1", "Delete"),
new Values.Item("2", "Insert"),
new Values.Item("3", "Update")));
}
or
Component Kit
@Option
ActionEnum action;
/** Define enum */
enum ActionEnum {
Delete,
Insert,
Update
}
Suggestions
Javajet
<!-- There were no simple way to load proposals from service in javajet -->
Component Kit
@Option
@Suggestable(value = "loadModules", parameters = { "myconfig" })
@Documentation("module names are loaded using service")
public String moduleName;
// In Service class
@Suggestions("loadModules")
public SuggestionValues loadModules(@Option final MyConfig myconfig) { }
Table
Javajet
<!-- There were no simple way to select complex objects in javajet -->
Component Kit
@Option
List<MyObject> config;
Validations
Property validation
Javajet
<!-- There were no url pattern validation in javajet -->
Component Kit
/** configuration class */
@Option
@Validable("url")
String config;
/** service class */
@AsyncValidation("url")
ValidationResult doValidate(String url) {
//validate the property
}
Binding properties
ActiveIf
Javajet
<PARAMETER
NAME="AUTH_TYPE"
FIELD="CLOSED_LIST"
NUM_ROW="10">
<ITEMS DEFAULT="NOAUTH">
<ITEM NAME="NOAUTH" VALUE="NOAUTH" />
<ITEM NAME="BASIC" VALUE="BASIC" />
<ITEM NAME="BASIC" VALUE="OAUTH2" />
</ITEMS>
</PARAMETER>
<PARAMETER
NAME="LOGIN"
FIELD="TEXT"
NUM_ROW="20"
SHOW_IF="AUTH_TYPE == 'BASIC'">
<DEFAULT>"login"</DEFAULT>
</PARAMETER>
<PARAMETER
NAME="PASSWORD"
FIELD="PASSWORD"
NUM_ROW="20"
SHOW_IF="AUTH_TYPE == 'BASIC'">
<DEFAULT>"password"</DEFAULT>
</PARAMETER>
Component Kit
enum AuthorizationType {
NoAuth,
Basic,
oauth2
}
@Option
@Required
@Documentation("")
private AuthorizationType type = AuthorizationType.NoAuth;
@Option
@Required
@ActiveIf(target = "type", value = "Basic")
@Documentation("Username for the basic authentication")
private String login;
@Option
@Required
@Credential
@ActiveIf(target = "type", value = "Basic")
@Documentation("Password for the basic authentication")
private String password;
After Variables
Javajet
<RETURN NAME="NAME_1_OF_AFTER_VARIABLE" TYPE="id_Integer" AVAILABILITY="AFTER"/>
<RETURN NAME="NAME_2_OF_AFTER_VARIABLE" TYPE="id_String" AVAILABILITY="AFTER"/>
Component Kit
import org.talend.sdk.component.api.component.AfterVariables.AfterVariableContainer;
import org.talend.sdk.component.api.component.AfterVariables.AfterVariable;
/**
* Possible types:
* Boolean.class, Byte.class, byte[].class, Character.class, Date.class, Double.class, Float.class,
* BigDecimal.class, Integer.class, Long.class, Object.class, Short.class, String.class, List.class
*/
@AfterVariable(value = "NAME_1_OF_AFTER_VARIABLE", description = "Some description", type = Integer.class)
@AfterVariable(value = "NAME_2_OF_AFTER_VARIABLE", description = "Custom variable description", type = String.class)
class Emitter {
@AfterVariableContainer
public Map<String, Object> afterVariables() {
// .. code
}
}
or
import org.talend.sdk.component.api.component.AfterVariables.AfterVariableContainer;
import org.talend.sdk.component.api.component.AfterVariables.AfterVariable;
import org.talend.sdk.component.api.component.AfterVariables;
@AfterVariables({
@AfterVariable(value = "NAME_1_OF_AFTER_VARIABLE", description = "Some description", type = Integer.class),
@AfterVariable(value = "NAME_2_OF_AFTER_VARIABLE", description = "Custom variable description", type = String.class)
})
class Emitter {
@AfterVariableContainer
public Map<String, Object> afterVariables() {
// .. code
}
}
Defining the runtime
Previously, the execution of a custom component was described through several Javajet files:
-
<component_name>_begin.javajet, containing the code required to initialize the component.
-
<component_name>_main.javajet, containing the code required to process each line of the incoming data.
-
<component_name>_end.javajet, containing the code required to end the processing and go to the following step of the execution.
With the Component Kit, the entire execution flow of a component is described through its main Java class <component_name><component_type>
(for example, LoggerProcessor
) and through services for reusable parts.
Component execution logic
Each type of component has its own execution logic. The same basic logic is applied to all components of the same type, and is then extended to implement each component specificities. The project generated from the starter already contains the basic logic for each component.
Talend Component Kit framework relies on several primitive components.
All components can use @PostConstruct
and @PreDestroy
annotations to initialize or release some underlying resource at the beginning and the end of a processing.
In distributed environments, class constructors are called on cluster manager nodes. Methods annotated with @PostConstruct and @PreDestroy are called on worker nodes. Thus, partition plan computation and pipeline tasks are performed on different nodes. |
1 | The created task is a JAR file containing class information, which describes the pipeline (flow) that should be processed in cluster. |
2 | During the partition plan computation step, the pipeline is analyzed and split into stages. The cluster manager node instantiates mappers/processors, gets estimated data size using mappers, and splits created mappers according to the estimated data size. All instances are then serialized and sent to the worker node. |
3 | Serialized instances are received and deserialized. Methods annotated with @PostConstruct are called. After that, pipeline execution starts. The @BeforeGroup annotated method of the processor is called before processing the first element in chunk.After processing the number of records estimated as chunk size, the @AfterGroup annotated method of the processor is called. Chunk size is calculated depending on the environment the pipeline is processed by. Once the pipeline is processed, methods annotated with @PreDestroy are called. |
All the methods managed by the framework must be public. Private methods are ignored. |
The framework is designed to be as declarative as possible but also to stay extensible by not using fixed interfaces or method signatures. This allows to incrementally add new features of the underlying implementations. |
Main changes
To ensure that the Cloud-compatible approach of the Component Kit framework is respected, some changes were introduced on the implementation side, including:
-
The File mode is no longer supported. You can still work with URIs and remote storage systems to use files. The file collection must be handled at the component implementation level.
-
The input and output connections between two components can only be of the Flow or Reject types. Other types of connections are not supported.
-
Every Output component must have a corresponding Input component and use a dataset. All datasets must use a datastore.
Integrating components into Talend Cloud
Learn about the Component Server with the following articles:
Component server and HTTP API
HTTP API
The HTTP API intends to expose most Talend Component Kit features over HTTP. It is a standalone Java HTTP server.
The WebSocket protocol is activated for the endpoints. Endpoints then use /websocket/v1 as base instead of /api/v1. See WebSocket for more details. |
To make sure that the migration can be enabled, you need to set the version the component was created with in the execution configuration that you send to the server (the component version is available in the component detail endpoint). To do that, use the tcomp::component::version key. |
Deprecated endpoints
Endpoints that are intended to disappear will be deprecated. An X-Talend-Warning
header will be returned with a message as value.
WebSocket transport
You can connect to any endpoint by:
-
Replacing
/api
with/websocket
-
Appending
/<http method>
to the URL -
Formatting the request as:
SEND
destination: <endpoint after v1>
<headers>
<payload>^@
For example:
SEND
destination: /component/index
Accept: application/json
^@
The response is formatted as follows:
MESSAGE
status: <http status code>
<headers>
<payload>^@
All endpoints are logged at startup. You can then find them in the logs if you have a doubt about which one to use. |
If you don’t want to create a pool of connections per endpoint/verb, you can use the bus endpoint: /websocket/v1/bus
.
This endpoint requires that you add the destinationMethod
header to each request with the verb value (GET
by default):
SEND
destination: /component/index
destinationMethod: GET
Accept: application/json
^@
Server configuration
The configuration is read from system properties, environment variables, etc. |
- talend.component.server.cache.maxSize
-
Default value:
1000
. Maximum items a cache can store, used for index endpoints. - talend.component.server.component.coordinates
-
A comma separated list of gav to locate the components
- talend.component.server.component.documentation.translations
-
Default value:
${home}/documentations
. A component translation repository. This is where you put your documentation translations. Their name must follow the patterndocumentation_${container-id}_language.adoc
where${container-id}
is the component jar name (without the extension and version, generally the artifactId). - talend.component.server.component.extend.dependencies
-
Default value:
true
. Should the component extensions add required dependencies. - talend.component.server.component.extension.maven.repository
-
If you deploy some extension, where they can create their dependencies if needed.
- talend.component.server.component.extension.startup.timeout
-
Default value:
180000
. Timeout for extension initialization at startup. Since the startup waits for extensions to be ready and loaded, this allows to control the latency it implies. - talend.component.server.component.registry
-
A property file (or multiple comma separated) where the value is a gav of a component to register (complementary with coordinates). Note that the path can end with a folder or .properties to take into account all properties files in a folder. - talend.component.server.documentation.active
-
Default value:
true
. Should the /documentation endpoint be activated. Note that when called on localhost the doc is always available. - talend.component.server.environment.active
-
Default value:
true
. Should the /api/v1/environment endpoint be activated. It shows some internal versions and git commit which are not always desirable over the wire. - talend.component.server.gridlayout.translation.support
-
Default value:
false
. Should the components using a @GridLayout support tab translation. Studio does not support that feature yet, so this is not enabled by default. - talend.component.server.icon.paths
-
Default value:
icons/%s.svg,icons/svg/%s.svg,icons/%s_icon32.png,icons/png/%s_icon32.png
. These patterns are used to find the icons in the classpath(s). - talend.component.server.jaxrs.exceptionhandler.defaultMessage
-
Default value:
false
. If set to true, it will replace any message for exceptions. Set to false to use the actual exception message. - talend.component.server.lastUpdated.useStartTime
-
Default value:
false
. Should the lastUpdated timestamp value of/environment
endpoint be updated with server start time. - talend.component.server.locale.mapping
-
Default value:
en*=en fr*=fr zh*=zh_CN ja*=ja de*=de
. For caching reasons the goal is to reduce the locales to the minimum required numbers. For instance we avoidfr
andfr_FR
which would lead to the same entries but x2 in terms of memory. This mapping enables that by whitelisting allowed locales, default beingen
. If the key ends withit means all string starting with the prefix will match. For instance
fr
will matchfr_FR
but alsofr_CA
. - talend.component.server.maven.repository
-
The local maven repository used to locate components and their dependencies
- talend.component.server.plugins.reloading.active
-
Default value:
false
. Should the plugins be un-deployed and re-deployed. - talend.component.server.plugins.reloading.interval
-
Default value:
600
. Interval in seconds between each check if plugins re-loading is enabled. - talend.component.server.plugins.reloading.marker
-
Specify a file to check its timestamp on the filesystem. This file will take precedence of the default ones provided by the
talend.component.server.component.registry
property (used for timestamp method). - talend.component.server.plugins.reloading.method
-
Default value:
timestamp
. Re-deploy method on atimestamp
orconnectors
version change. By default, the timestamp is checked on the file pointed bytalend.component.server.component.registry
ortalend.component.server.plugins.reloading.marker
variable, otherwise we inspect the content of theCONNECTORS_VERSION
file. Accepted values:timestamp
, anything else defaults toconnectors
. - talend.component.server.request.log
-
Default value:
false
. Should all requests/responses be logged (debug purposes - only works when running with CXF). - talend.component.server.security.command.handler
-
Default value:
securityNoopHandler
. How to validate a command/request. Accepted values: securityNoopHandler. - talend.component.server.security.connection.handler
-
Default value:
securityNoopHandler
. How to validate a connection. Accepted values: securityNoopHandler. - talend.component.server.user.extensions.location
-
A folder available for the server - don’t forget to mount it in docker if you are using the image - which accepts subfolders named as component plugin id (generally the artifactId or jar name without the version, ex: jdbc). Each family folder can contain:
-
a
user-configuration.properties
file which will be merged with the component configuration system (see services). This properties file enables the function userJar(xxxx)
to replace the jar named xxxx
with its virtual GAV (groupId:artifactId:version
), -
a list of jars which will be merged with the component family classpath
-
- talend.component.server.user.extensions.provisioning.location
-
Default value:
auto
. Should the implicit artifacts be provisioned to an m2. If set to auto
, it tries to detect if there is an m2 to provision (recommended); if set to skip
, it is ignored; otherwise, the value is used as an m2 path.
Configuration mechanism
The configuration uses Microprofile Config for most entries. This means that they can be passed through system properties and environment variables (by replacing dots with underscores and making the keys uppercase), as shown below.
To configure a Docker image rather than a standalone instance, Docker Config and secrets integration allows you to read the configuration from files. You can customize the configuration of these integrations through system properties.
The Docker integration also supports a secure: prefix to encrypt values and system properties, when required.
It is fully implemented using the Apache Geronimo Microprofile Config extensions.
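For example, the request log toggle listed above can be set either way (same property, two equivalent forms):
# as a system property
export _JAVA_OPTIONS="-Dtalend.component.server.request.log=true"
# as an environment variable: dots become underscores, keys are uppercased
export TALEND_COMPONENT_SERVER_REQUEST_LOG=true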
HTTPS activation
Using the server ZIP (or Docker image), you can configure HTTPS by adding properties to _JAVA_OPTIONS
. Assuming that you have a certificate in /opt/certificates/component.p12
(don’t forget to add/mount it in the Docker image if you use it), you can activate it as follows:
# use -e for Docker and `--https=8443` to set the port
#
# this skips the http port binding, only binds https on port 8443, and sets up the correct certificate
export _JAVA_OPTIONS="-Dskip-http=true -Dssl=true -Dhttps=8443 -Dkeystore-type=PKCS12 -Dkeystore-alias=talend -Dkeystore-password=talend -Dkeystore-file=/opt/certificates/component.p12"
Defining queries
You can define simple queries on the configuration types and components endpoints. These two endpoints support different parameters.
Queries on the configurationtype/index
endpoint support the following parameters:
-
type
-
id
-
name
-
metadata
of the first configuration property as parameters.
Queries on the component/index
endpoint support the following parameters:
-
plugin
-
name
-
id
-
familyId
-
metadata
of the first configuration property as parameters.
In both cases, you can combine several conditions using OR
and AND
operators. If you combine more than two conditions, note that they are evaluated in the order they are written.
Each supported parameter in a condition can be "equal to" (=
) or "not equal to" (!=
) a defined value (case-sensitive).
For example:
(metadata[configurationtype::type] = dataset) AND (plugin = jdbc-component) OR (name = input)
In this example, the query gets components that have a dataset and belong to the jdbc-component plugin, or components that are named input
.
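Assuming these endpoints accept the expression through their query parameter (the parameter name is an assumption here) and that it is URL-encoded, such a call could look like:
# hypothetical call sending the previous filter, URL-encoded, to the component index
curl "http://localhost:8080/api/v1/component/index?query=%28plugin%20%3D%20jdbc-component%29%20AND%20%28name%20%3D%20input%29"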
Web forms and REST API
The component-form
library provides a way to build a component REST API facade that is compatible with the React form library.
For example:
@Path("tacokit-facade")
@ApplicationScoped
public class ComponentFacade {
private static final String[] EMPTY_ARRAY = new String[0];
@Inject
private Client client;
@Inject
private ActionService actionService;
@Inject
private UiSpecService uiSpecService;
@Inject // assuming it is available in your app, use any client you want
private WebTarget target;
@POST
@Path("action")
public void action(@Suspended final AsyncResponse response, @QueryParam("family") final String family,
@QueryParam("type") final String type, @QueryParam("action") final String action,
final Map<String, Object> params) {
client.action(family, type, action, params).handle((r, e) -> {
if (e != null) {
onException(response, e);
} else {
response.resume(actionService.map(type, r));
}
return null;
});
}
@GET
@Path("index")
public void getIndex(@Suspended final AsyncResponse response,
@QueryParam("language") @DefaultValue("en") final String language) {
target
.path("component/index")
.queryParam("language", language)
.request(APPLICATION_JSON_TYPE)
.rx()
.get(ComponentIndices.class)
.toCompletableFuture()
.handle((index, e) -> {
if (e != null) {
onException(response, e);
} else {
index.getComponents().stream().flatMap(c -> c.getLinks().stream()).forEach(
link -> link.setPath(link.getPath().replaceFirst("/component/", "/application/").replace(
"/details?identifiers=", "/detail/")));
response.resume(index);
}
return null;
});
}
@GET
@Path("detail/{id}")
public void getDetail(@Suspended final AsyncResponse response,
@QueryParam("language") @DefaultValue("en") final String language, @PathParam("id") final String id) {
target
.path("component/details")
.queryParam("language", language)
.queryParam("identifiers", id)
.request(APPLICATION_JSON_TYPE)
.rx()
.get(ComponentDetailList.class)
.toCompletableFuture()
.thenCompose(result -> uiSpecService.convert(result.getDetails().iterator().next()))
.handle((result, e) -> {
if (e != null) {
onException(response, e);
} else {
response.resume(result);
}
return null;
});
}
private void onException(final AsyncResponse response, final Throwable e) {
final UiActionResult payload;
final int status;
if (WebException.class.isInstance(e)) {
final WebException we = WebException.class.cast(e);
status = we.getStatus();
payload = actionService.map(we);
} else if (CompletionException.class.isInstance(e)) {
final CompletionException actualException = CompletionException.class.cast(e);
log.error(actualException.getMessage(), actualException);
status = Response.Status.BAD_GATEWAY.getStatusCode();
payload = actionService.map(new WebException(actualException, -1, emptyMap()));
} else {
log.error(e.getMessage(), e);
status = Response.Status.BAD_GATEWAY.getStatusCode();
payload = actionService.map(new WebException(e, -1, emptyMap()));
}
response.resume(new WebApplicationException(Response.status(status).entity(payload).build()));
}
}
The Client can be created using ClientFactory.createDefault(System.getProperty("app.components.base", "http://localhost:8080/api/v1")) and the service can be a simple new UiSpecService<>() . The factory uses JAX-RS if the API is available (assuming a JSON-B provider is registered); otherwise, it tries to use Spring.
|
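As a minimal wiring sketch outside any IoC container - based on the note above; the generic parameter and exact constructor signature may vary with your version:
// sketch only: client pointing to the component server base URL
final Client client = ClientFactory.createDefault(
        System.getProperty("app.components.base", "http://localhost:8080/api/v1"));
// conversion service used to map component details to the UiSpec model
// (constructor wiring is an assumption, see the note above)
final UiSpecService<Object> uiSpecService = new UiSpecService<>(client);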
The conversion from the component model (REST API) to the uiSpec model is done through UiSpecService
. It is based on the object model which is mapped to a UI model. Having a flat model in the component REST API allows you to customize layers easily.
You can completely control the available components, tune the rendering by switching the uiSchema
, and add or remove parts of the form.
You can also add custom actions and buttons for specific needs of the application.
The /migrate endpoint was not shown in the previous snippet but if you need it, add it as well.
|
Using the UiSpec model without the tooling
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>component-form-model</artifactId>
<version>${talend-component-kit.version}</version>
</dependency>
This Maven dependency provides the UiSpec model classes. You can use the Ui
API (with or without the builders) to create UiSpec representations.
For example:
final Ui form1 = ui()
.withJsonSchema(JsonSchema.jsonSchemaFrom(Form1.class).build()) (1)
.withUiSchema(uiSchema() (2)
.withKey("multiSelectTag")
.withRestricted(false)
.withTitle("Simple multiSelectTag")
.withDescription("This data list accepts values that are not in the list of suggestions")
.withWidget("multiSelectTag")
.build())
.withProperties(myFormInstance) (3)
.build();
final String json = jsonb.toJson(form1); (4)
1 | The JsonSchema is extracted by reflection from the Form1 class. @JsonSchemaIgnore allows ignoring a field and @JsonSchemaProperty allows renaming a property. |
2 | A UiSchema is programmatically built using the builder API. |
3 | An instance of the form is passed to let the serializer extract its JSON model. |
4 | The Ui model, which can be used by UiSpec compatible front widgets, is serialized. |
The model uses the JSON-B API to define the binding. Make sure to have an implementation in your classpath. To do that, add the following dependencies:
<dependency>
<groupId>org.apache.geronimo.specs</groupId>
<artifactId>geronimo-jsonb_1.0_spec</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>org.apache.geronimo.specs</groupId>
<artifactId>geronimo-json_1.1_spec</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>org.apache.johnzon</groupId>
<artifactId>johnzon-jsonb</artifactId>
<version>${johnzon.version}</version> <!-- 1.1.5 for instance -->
</dependency>
Using the UiSpec for custom models
The following module enables you to define a UiSpec through annotations on your own models:
<dependency>
<groupId>org.talend.sdk.component</groupId>
<artifactId>component-uispec-mapper</artifactId>
<version>${talend-component-kit.version}</version>
</dependency>
This can’t be used in components and is only intended for web applications. |
org.talend.sdk.component.form.uispec.mapper.api.service.UiSpecMapper
enables creating a Ui
instance from a custom type annotated with
org.talend.sdk.component.form.uispec.mapper.api.model.View
and org.talend.sdk.component.form.uispec.mapper.api.model.View.Schema
.
UiSpecMapper returns a Supplier and not directly a Ui because the ui-schema is re-evaluated when get() is called.
This enables updating the title maps, for example.
|
Here is an example:
@Data
public abstract class BaseModel {
@View.Skip
private String id;
@View.Skip
private Date created;
@View.Skip
private Date updated;
@View.Schema(type = "hidden", readOnly = true)
private long version;
}
@Data
@ToString(callSuper = true)
@EqualsAndHashCode(callSuper = true)
public class ComponentModel extends BaseModel {
@View.Schema(length = 1024, required = true, position = 1, reference = "vendors")
private String vendor;
@View.Schema(length = 2048, required = true, position = 2)
private String name;
@View.Schema(length = 2048, required = true, position = 3)
private String license;
@View.Schema(length = 2048, required = true, position = 4)
private String sources;
@View.Schema(length = 2048, required = true, position = 5)
private String bugtracker;
@View.Schema(length = 2048, required = true, position = 6)
private String documentation;
@View.Schema(widget = "textarea", length = 8192, required = true, position = 7)
private String description;
@View.Schema(widget = "textarea", length = 8192, position = 8)
private String changelog;
}
This API directly maps the UiSpec model (JSON schema and UI schema of Talend UIForm
).
The default implementation of the mapper is available at org.talend.sdk.component.form.uispec.mapper.impl.UiSpecMapperImpl
.
Here is an example:
private UiSpecMapper mapper = new UiSpecMapperImpl(new Configuration(getTitleMapProviders()));
@GET
public Ui getNewComponentModelForm() {
return mapper.createFormFor(ComponentModel.class).get();
}
@GET
@Path("{id}")
public Ui editComponentModelForm(@PathParam("id") final String id) {
final ComponentModel component = findComponent(id);
final Ui spec = getNewComponentModelForm();
spec.setProperties(component);
return spec;
}
The getTitleMapProviders()
method will generally look up a set of TitleMapProvider
instances in your IoC context.
This API is used to fill the titleMap
of the form when a reference identifier is set on the @Schema
annotation.
JavaScript integration
component-kit.js is no longer available (previous versions remain on NPM) and is replaced by @talend/react-containers .
The previous import can be replaced by import kit from '@talend/react-containers/lib/ComponentForm/kit'; .
|
Default JavaScript integration goes through the Talend UI Forms library and its Containers wrapper.
Documentation is now available at the previous link.
Logging
The logging uses Log4j2. You can specify a custom configuration by using the -Dlog4j.configurationFile
system property or by adding a log4j2.xml
file to the classpath.
Here are some common configurations:
-
Console logging:
<?xml version="1.0"?>
<Configuration status="INFO">
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="[%d{HH:mm:ss.SSS}][%highlight{%-5level}][%15.15t][%30.30logger] %msg%n"/>
</Console>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="Console"/>
</Root>
</Loggers>
</Configuration>
Output messages look like:
[16:59:58.198][INFO ][ main][oyote.http11.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-34763"]
-
JSON logging:
<?xml version="1.0"?>
<Configuration status="INFO">
<Properties>
<!-- DO NOT PUT logSource there, it is useless and slow -->
<Property name="jsonLayout">{"severity":"%level","logMessage":"%encode{%message}{JSON}","logTimestamp":"%d{ISO8601}{UTC}","eventUUID":"%uuid{RANDOM}","@version":"1","logger.name":"%encode{%logger}{JSON}","host.name":"${hostName}","threadName":"%encode{%thread}{JSON}","stackTrace":"%encode{%xThrowable{full}}{JSON}"}%n</Property>
</Properties>
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="${jsonLayout}"/>
</Console>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="Console"/>
</Root>
</Loggers>
</Configuration>
Output messages look like:
{"severity":"INFO","logMessage":"Initializing ProtocolHandler [\"http-nio-46421\"]","logTimestamp":"2017-11-20T16:04:01,763","eventUUID":"8b998e17-7045-461c-8acb-c43f21d995ff","@version":"1","logger.name":"org.apache.coyote.http11.Http11NioProtocol","host.name":"TLND-RMANNIBUCAU","threadName":"main","stackTrace":""}
-
Rolling file appender:
<?xml version="1.0"?>
<Configuration status="INFO">
<Appenders>
<RollingRandomAccessFile name="File" fileName="${LOG_PATH}/application.log" filePattern="${LOG_PATH}/application-%d{yyyy-MM-dd}.log">
<PatternLayout pattern="[%d{HH:mm:ss.SSS}][%highlight{%-5level}][%15.15t][%30.30logger] %msg%n"/>
<Policies>
<SizeBasedTriggeringPolicy size="100 MB" />
<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
</Policies>
</RollingRandomAccessFile>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="File"/>
</Root>
</Loggers>
</Configuration>
More details are available in the RollingFileAppender documentation.
You can combine the previous layouts (message formats) and appenders (where logs are written). |
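For instance, to point the server to a custom configuration file instead of the built-in ones (the path below is only an example):
# point log4j2 to a custom configuration file (example path)
export _JAVA_OPTIONS="-Dlog4j.configurationFile=/opt/talend/component-kit/conf/log4j2.xml"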
Docker
The server image is deployed on Docker. Its version is suffixed with a timestamp to ensure images are not overridden, which could break your usage. You can check the available versions on Docker Hub.
Run
You can run the docker image by executing this command:
$ sudo docker run -p 8080:8080 tacokit/component-starter
Configure
You can set the env variable _JAVA_OPTIONS
to customize the server. By default, the server is installed in /opt/talend/component-kit
.
Maven repository
The Maven repository is the default one of the machine; you can change it by setting the system property talend.component.server.maven.repository=/path/to/your/m2
.
Deploy components to the server
If you want to deploy some components, you can configure which ones in _JAVA_OPTIONS (see the server documentation online) and mount your local m2:
$ docker run \
-p 8080:8080 \
-v ~/.m2:/root/.m2 \
-e _JAVA_OPTIONS="-Dtalend.component.server.component.coordinates=g:a:v,g2:a2:v2,..." \
component-server
Logging
The component server docker image comes with two log4j2 profiles: TEXT
(default) and JSON
.
The logging profile can be changed by setting the environment variable LOGGING_LAYOUT
to JSON
.
Note that the Component Server adds the KAFKA
profile to these default Talend profiles. With this profile, all logs are sent to Kafka.
You can check the exact configuration in the component-runtime/images/component-server-image/src/main/resources folder.
|
default or TEXT profile
The console logging is on at INFO
level by default. You can customize it by setting the CONSOLE_LOG_LEVEL
environment variable to DEBUG
, INFO
, WARN
or to any other log level supported by log4j2.
Run docker image with console logging:
sudo docker run -p 8080:8080 \
-e CONSOLE_LOG_LEVEL=DEBUG \
component-server
JSON profile
The JSON profile does the following:
-
Logs on the console using the
CONSOLE_LOG_LEVEL
configuration, as the default profile does. It uses the formatting shown below. -
If the
TRACING_KAFKA_URL
environment variable is set, it logs the OpenTracing data to the defined Kafka using the topic TRACING_KAFKA_TOPIC
. This level can be customized by setting the KAFKA_LOG_LEVEL
environment variable (INFO
by default).
Events are logged in the following format:
{
"eventUUID":"%uuid{RANDOM}",
"correlationId":"%X{traceId}",
"spanId":"%X{spanId}",
"traceId":"%X{traceId}",
"category":"components",
"eventType":"LOGEvent",
"severity":"%level",
"logMessage":"%encode{%message}{JSON}",
"logSource":{
"class.name":"%class",
"file.name":"%file",
"host.name":"%X{hostname}",
"line.number":"%line",
"logger.name":"%logger",
"method.name":"%method",
"process.id":"%pid"
},
"service":"${env:LOG_SERVICE_NAME:-component-server}",
"application":"${env:LOG_APP_NAME:-component-server}",
"exportable":"${env:LOG_EXPORTABLE:-true}",
"audit":"${env:LOG_AUDIT:-false}",
"logTimestamp":"%d{ISO8601}{UTC}",
"serverTimestamp":"%d{ISO8601}{UTC}",
"customInfo":{
"threadName":"%encode{%thread}{JSON}",
"stackTrace":"%encode{%xThrowable{full}}{JSON}"
}
}
KAFKA profile
This profile is very close to the JSON profile and also adds the LOG_KAFKA_TOPIC
and LOG_KAFKA_URL
configuration. The difference is that it also sends the default logs to Kafka, in addition to the tracing logs.
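Assuming this profile is selected through LOGGING_LAYOUT like the JSON one (the Kafka host and topic below are example values), running it could look like:
sudo docker run -p 8080:8080 \
 -e LOGGING_LAYOUT=KAFKA \
 -e LOG_KAFKA_URL=my-kafka:9092 \
 -e LOG_KAFKA_TOPIC=component-server-logs \
 component-server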
OpenTracing
The component server uses Geronimo OpenTracing to monitor requests.
The tracing can be activated by setting the TRACING_ON
environment variable to true
.
The tracing rate is configurable by setting the TRACING_SAMPLING_RATE
environment variable.
It accepts 0
(none) and 1
(all, default) as values to ensure the consistency of the reporting.
You can find all the details on the configuration in org.talend.sdk.component.server.configuration.OpenTracingConfigSource .
|
Run docker image with tracing on:
sudo docker run -p 8080:8080 \
-e TRACING_ON=true \
tacokit/component-server
By default, Geronimo OpenTracing logs the spans in a Zipkin format, so you can use the Kafka profile as explained before to wire it to any OpenTracing backend.
Building the docker image
You can build component server images in your local Docker registry by executing these instructions in the corresponding image directory:
# ex: cd images/component-server-image
mvn clean compile jib:dockerBuild
Integrating components into the image
Docker Compose
Docker Compose allows you to deploy the server with components, by mounting the component volume into the server image.
docker-compose.yml
example:
version: '3.2'
services:
component-server:
healthcheck:
timeout: 3s
interval: 3s
retries: 3
test: curl --fail http://localhost:1234/api/v1/environment
image: tacokit/component-server:${COMPONENT_SERVER_IMAGE:-1.1.2_20181108161652}
command: --http=1234
environment:
- CONSOLE_LOG_LEVEL=INFO
- _JAVA_OPTIONS=
-Xmx1024m
-Dtalend.component.server.component.registry=/opt/talend/connectors/component-registry.properties
-Dtalend.component.server.maven.repository=/opt/talend/connectors
ports:
- 1234:1234/tcp
volumes:
- type: bind
read_only: true
source: ${CONNECTORS_REPOSITORY}
target: /opt/talend/connectors
volume:
nocopy: true
If you want to mount it from another image, you can use this compose configuration:
version: '3.2'
services:
component-server:
healthcheck:
timeout: 3s
interval: 3s
retries: 3
test: curl --fail http://localhost:1234/api/v1/environment
image: tacokit/component-server:${COMPONENT_SERVER_IMAGE_VERSION}
command: --http=1234
environment:
- _JAVA_OPTIONS=
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
-Djava.library.path=/opt/talend/component-kit/work/sigar/sigar:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
-Xmx1024m
-Dtalend.component.server.component.registry=/opt/talend/connectors/component-registry.properties
-Dtalend.component.server.maven.repository=/opt/talend/connectors
ports:
- 1234:1234/tcp
- 5005:5005/tcp
volumes:
- connectors:/opt/talend/connectors:ro
connectors:
image: talend/connectors:${CONNECTORS_VERSION}
environment:
- CONNECTORS_SETUP_OPTS=setup --wait-for-end --component-jdbc-auto-download-drivers
volumes:
- connectors:/opt/talend/connectors:ro
volumes:
connectors:
To run one of the previous compose examples, you can use docker-compose -f docker-compose.yml up
.
Only use the configuration related to port 5005 (in ports and the -agentlib option in _JAVA_OPTIONS ) to debug the server on port 5005 . Don’t set it in production.
|
Adding extensions to the server
You can mount a volume in /opt/talend/component-kit/custom/
and the jars in that folder will be deployed with the server.
Since the server relies on CDI (Apache OpenWebBeans), you can use that technology to enrich it with JAX-RS endpoints, interceptors, etc., or just libraries that need to be in the JVM.
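As an illustration, here is a minimal JAX-RS endpoint - the class and path names are arbitrary - that could be packaged in a jar and dropped in that folder:
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// example extension: discovered by the server through CDI/JAX-RS scanning
@Path("custom-ping")
@ApplicationScoped
public class CustomPingResource {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        return "pong";
    }
}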
Component Server Vault Proxy
Browse the API description using OpenAPI.
A Vault proxy dedicated to the component server allows you to safely manage credential encryption when operating in a Cloud environment.
The Vault Proxy only concerns data marked as @Credential
in the component configuration.
This Vault proxy:
-
receives an encrypted payload containing sensitive data from the remote engine, via HTTP.
-
decrypts the data.
-
caches the decrypted value of the data in memory, for performance reasons.
-
sends unencrypted data to the Component Server using HTTPS. An SSL certificate can be automatically generated to secure the data.
Vault proxy configuration
The configuration is read from system properties, environment variables, etc. |
- talend.vault.cache.client.certificate.acceptAny
-
Default value:
false
. Should any certificate be accepted - only for dev purposes. - talend.vault.cache.client.executor.server.core
-
Default value:
64
. Thread pool core size for Component Server client. - talend.vault.cache.client.executor.server.keepAlive
-
Default value:
60000
. Thread keep alive (in ms) for Component Server client thread pool. - talend.vault.cache.client.executor.server.max
-
Default value:
256
. Thread pool max size for Component Server client. - talend.vault.cache.client.executor.vault.core
-
Default value:
64
. Thread pool core size for Vault client. - talend.vault.cache.client.executor.vault.keepAlive
-
Default value:
60000
. Thread keep alive (in ms) for Vault client thread pool. - talend.vault.cache.client.executor.vault.max
-
Default value:
256
. Thread pool max size for Vault client. - talend.vault.cache.client.providers
-
Fully qualified names of the JAX-RS providers (message body readers/writers) for the Vault and Component Server clients.
- talend.vault.cache.client.server.authorization
-
The token to use to call the Component Server, if any.
- talend.vault.cache.client.server.certificate.keystore.location
-
Where the keystore to use to connect to Component Server is located.
- talend.vault.cache.client.server.certificate.keystore.password
-
Default value:
changeit
. The keystore password for talend.vault.cache.client.server.certificate.keystore.location
. - talend.vault.cache.client.server.certificate.keystore.type
-
The keystore type for
talend.vault.cache.client.server.certificate.keystore.location
. - talend.vault.cache.client.server.certificate.truststore.type
-
The truststore type for
talend.vault.cache.client.server.certificate.keystore.location
. - talend.vault.cache.client.server.hostname.accepted
-
Default value:
localhost,127.0.0.1,0:0:0:0:0:0:0:1
. Valid hostnames for the Component Server certificates (see java.net.ssl.HostnameVerifier
). - talend.vault.cache.client.timeout.connect
-
Default value:
30000
. HTTP connection timeout to vault server. - talend.vault.cache.client.timeout.read
-
Default value:
30000
. HTTP read timeout to vault server. - talend.vault.cache.client.vault.certificate.keystore.location
-
Where the keystore to use to connect to vault is located.
- talend.vault.cache.client.vault.certificate.keystore.password
-
Default value:
changeit
. The keystore password for talend.vault.cache.client.vault.certificate.keystore.location
. - talend.vault.cache.client.vault.certificate.keystore.type
-
The keystore type for
talend.vault.cache.client.vault.certificate.keystore.location
. - talend.vault.cache.client.vault.certificate.truststore.type
-
The truststore type for
talend.vault.cache.client.vault.certificate.keystore.location
. - talend.vault.cache.client.vault.hostname.accepted
-
Default value:
localhost,127.0.0.1,0:0:0:0:0:0:0:1
. Valid hostnames for the Vault certificates (see java.net.ssl.HostnameVerifier
). - talend.vault.cache.jcache.cache.expiry
-
Default value:
3600
. JCache expiry for decrypted values (ms). - talend.vault.cache.jcache.cache.management
-
Default value:
false
. Should JCache MBeans be registered. - talend.vault.cache.jcache.cache.statistics
-
Default value:
false
. Should JCache statistics be enabled. - talend.vault.cache.jcache.manager.properties
-
Default value:
(empty). JCache CacheManager
properties used to initialize the instance. - talend.vault.cache.jcache.manager.uri
-
Default value:
geronimo://simple-jcache.properties
. Configuration for the JCache setup; the default implementation is Geronimo Simple Cache. - talend.vault.cache.jcache.maxCacheSize
-
Default value:
100000
. JCache max size per cache. - talend.vault.cache.jcache.refresh.period
-
Default value:
30000
. How often (in ms) the Component Server should be checked to invalidate the caches on the component parameters (to identify credentials). - talend.vault.cache.security.allowedIps
-
Default value:
localhost,127.0.0.1,0:0:0:0:0:0:0:1
. The IPs or hosts allowed to call that server on /api/*
if no token is passed. - talend.vault.cache.security.hostname.sanitizer
-
Default value:
none
. Enables sanitizing the hostnames before testing them. Defaults to none
, which is a no-op. Supported values are docker
(for the <folder>_<service>_<number>.<folder>_<network>
pattern) and weave
(for the <prefix>_dataset_<number>.<suffix>
pattern). - talend.vault.cache.security.tokens
-
Default value:
-
. The tokens enabling a client to call this server without being in the allowedIps
whitelist. - talend.vault.cache.service.auth.cantDecipherStatusCode
-
Default value:
422
. Status code sent when vault can’t decipher some values. - talend.vault.cache.service.auth.refreshDelayMargin
-
Default value:
600000
. How often (in ms) to refresh the vault token. - talend.vault.cache.service.auth.refreshDelayOnFailure
-
Default value:
10000
. How often (in ms) to refresh the vault token in case of an authentication failure. - talend.vault.cache.service.decipher.skip.regex
-
Default value:
vault\:v[0-9]+\:.*
. The regex to whitelist ciphered keys; others will be passed through in the output without going to Vault. - talend.vault.cache.talendComponentKit.url
-
Base URL to connect to the Component Server.
- talend.vault.cache.vault.auth.endpoint
-
Default value:
v1/auth/engines/login
. The vault path to retrieve a token. - talend.vault.cache.vault.auth.roleId
-
Default value:
-
. The vault role identifier to use to log in (if the token is not set). -
means it is ignored. - talend.vault.cache.vault.auth.secretId
-
Default value:
-
. The vault secret identifier to use to log in (if the token is not set). -
means it is ignored. - talend.vault.cache.vault.auth.token
-
Default value:
-
. The vault token to use to log in (makes roleId and secretId ignored). -
means it is ignored. - talend.vault.cache.vault.decrypt.endpoint
-
Default value:
v1/tenants-keyrings/decrypt/{x-talend-tenant-id}
. The vault path to decrypt values. You can use the variable {x-talend-tenant-id}
, which is replaced by the x-talend-tenant-id
header value. - talend.vault.cache.vault.url
-
Base URL to connect to Vault.
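As an illustrative sketch, a few of these properties can be combined when starting the proxy image (the values below reuse the Docker Compose sample shown later in this section):
docker run -p 9090:8080 \
 -e _JAVA_OPTIONS="-Dtalend.vault.cache.vault.url=http://vault:8200 \
 -Dtalend.vault.cache.vault.auth.roleId=myrole \
 -Dtalend.vault.cache.talendComponentKit.url=https://component-server:8080/api/v1" \
 tacokit/component-server-vault-proxy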
Configuration mechanism
The configuration uses Microprofile Config for most entries. This means that they can be passed through system properties and environment variables (by replacing dots with underscores and making the keys uppercase).
To configure a Docker image rather than a standalone instance, Docker Config and secrets integration allows you to read the configuration from files. You can customize the configuration of these integrations through system properties.
The Docker integration also supports a secure: prefix to encrypt values and system properties, when required.
It is fully implemented using the Apache Geronimo Microprofile Config extensions.
Adding the Vault Proxy to your Docker Compose
The YAML below is the recommended configuration to enable the Vault Proxy and Component Server to communicate over HTTPS.
Docker Compose
version: '3.2'
services:
component-server: (1)
healthcheck:
timeout: 3s
interval: 3s
retries: 3
test: curl --fail http://localhost:8080/api/v1/environment
image: tacokit/component-server:${COMPONENT_SERVER_IMAGE_VERSION:-1.1.6_20190208104207}
environment:
- _JAVA_OPTIONS=
-Dtalend.component.server.filter.secured.tokens=vault-proxy (2)
-Dtalend.component.server.ssl.active=true (3)
-Dtalend.component.server.ssl.keystore.location=/opt/talend/configuration/https.p12 (3)
-Dtalend.component.server.ssl.keystore.type=PKCS12 (3)
-Dtalend.component.server.component.registry=/opt/talend/connectors/component-registry.properties (4)
-Dtalend.component.server.maven.repository=/opt/talend/connectors (4)
volumes:
- connectors:/opt/talend/connectors:ro (4)
- vault-proxy-configuration:/opt/talend/configuration (3)
component-server-vault-proxy: (5)
healthcheck:
timeout: 3s
interval: 3s
retries: 3
test: curl --fail http://localhost:8080/api/v1/proxy/environment
image: tacokit/component-server-vault-proxy:${COMPONENT_SERVER_VAULT_PROXY_IMAGE_VERSION:-1.1.6_20190208104221}
environment:
- _JAVA_OPTIONS=
-Dtalend.vault.cache.client.server.certificate.keystore.location=/opt/talend/configuration/https.p12 (6)
-Dtalend.vault.cache.client.server.certificate.keystore.type=PKCS12 (6)
-Dtalend.vault.cache.client.server.hostname.accepted=component-server (6)
-Dtalend.vault.cache.client.server.authorization=vault-proxy (7)
-Dtalend.vault.cache.talendComponentKit.url=https://component-server:8080/api/v1 (7)
-Dtalend.vault.cache.vault.url=http://vault:8200 (8)
-Dtalend.vault.cache.vault.auth.roleId=myrole (8)
-Dtalend.vault.cache.vault.decrypt.endpoint=v1/something/decrypt/00000001 (8)
-Dtalend.vault.cache.security.allowedIps=${COMPONENT_SERVER_VAULT_PROXY_CLIENT_IP:-127.0.0.1} (9)
ports:
- 9090:8080/tcp
links: (10)
- "component-server:component-server"
# - "vault:vault"
volumes:
- vault-proxy-configuration:/opt/talend/configuration:ro
connectors: (11)
image: registry.datapwn.com/talend/connectors:${CONNECTORS_IMAGE_VERSION:-1.0.0_master_20190208091312}
environment:
- CONNECTORS_SETUP_OPTS=setup --wait-for-end
volumes:
- connectors:/opt/talend/connectors:ro
volumes: (12)
connectors:
vault-proxy-configuration:
1 | The standard Component Server entry. |
2 | Ensures only a client with a particular token can call the server. It is similar to a shared secret and only allows calling the server in "remote" mode, since only the local mode is enabled by default. |
3 | Activates and configures the auto-generation of an X.509 certificate which is used for the HTTPS connector on the server. |
4 | Binds the components to deploy into the server. |
5 | Definition of the Vault proxy service which handles the mediation between Vault and the Component Server. |
6 | Since both servers are co-located, the generated certificate is inherited from the Component Server, which allows creating the client that connects to it. |
7 | Configuration of the base URL to connect to the server - see <10>. |
8 | Configuration of the vault connection and security information. |
9 | Ensures that connecting from $COMPONENT_SERVER_VAULT_PROXY_CLIENT_IP to the Vault proxy is possible. Any other IP will be rejected. |
10 | Links both services so they can communicate. It avoids exposing the Component Server port outside of its own container (no ports mapping in the Component Server service definition). Note that if your vault is a service, you can link it here as well. |
11 | Common component image service definition. |
12 | Volumes used by the services. The connectors volume has not changed, but vault-proxy-configuration was added for the automatic HTTPS configuration. |
This setup enables the Vault Proxy and Component Server to communicate. You can now use the Vault Proxy as if it was the Component Server, by using localhost:9090
(or any other host matching your deployment) instead of the Component Server directly.
Linking the Vault Proxy to the Component Server through HTTPS
When the Vault Proxy is enabled, ensure you configure HTTPS on the Component Server using the following parameters:
- talend.component.server.ssl.active
-
true
or false
. Indicates if the SSL protocol is enabled. - talend.component.server.ssl.password
-
Keystore password.
- talend.component.server.ssl.keystore.location
-
Path to Keystore.
- talend.component.server.ssl.keystore.alias
-
Private key/certificate alias.
- talend.component.server.ssl.keystore.type
-
Keystore type.
- talend.component.server.ssl.keystore.generation.force
-
true
or false
. - talend.component.server.ssl.keystore.generation.command
-
Specifies if a custom command is to be used to generate the certificate.
- talend.component.server.ssl.keypair.algorithm
-
Encryption algorithm.
RSA
by default. - talend.component.server.ssl.certificate.dname
-
Distinguished name.
- talend.component.server.ssl.keypair.size
-
Size of the key.
2048
by default. - talend.component.server.ssl.port
-
SSL port to use.
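For example, these properties can be combined through _JAVA_OPTIONS (the values below are illustrative and match the Docker Compose sample above):
export _JAVA_OPTIONS="-Dtalend.component.server.ssl.active=true \
 -Dtalend.component.server.ssl.keystore.location=/opt/talend/configuration/https.p12 \
 -Dtalend.component.server.ssl.keystore.type=PKCS12 \
 -Dtalend.component.server.ssl.password=changeit"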
Adding extensions to the instance
You can mount a volume in /opt/talend/component-kit-vault-proxy/custom/
and the jars in that folder will be deployed with the server.
Since the server relies on CDI (Apache OpenWebBeans), you can use that technology to enrich it with JAX-RS endpoints, interceptors, etc., or just libraries that need to be in the JVM.