component-runtime-testing-spark
The following artifact allows you to test against a Spark cluster:
<dependency>
  <groupId>org.talend.sdk.component</groupId>
  <artifactId>component-runtime-testing-spark</artifactId>
  <version>${talend-component.version}</version>
  <scope>test</scope>
</dependency>
JUnit 4
The testing relies on a JUnit TestRule. It is recommended to use it as a @ClassRule to make sure that a single instance of the Spark cluster is built for the whole test class. You can also use it as a simple @Rule to create a Spark cluster instance per method instead of per class.
The @ClassRule takes the Spark and Scala versions to use as parameters. It then forks a master and N slaves.
Finally, the submit* methods allow you to send jobs either from the test classpath or from a shade if you run it as an integration test.
For example:
public class SparkClusterRuleTest {

    @ClassRule
    public static final SparkClusterRule SPARK = new SparkClusterRule("2.10", "1.6.3", 1);

    @Test
    public void classpathSubmit() throws IOException {
        SPARK.submitClasspath(SubmittableMain.class, getMainArgs());
        // wait for the test to pass
    }
}
This testing methodology works with @Parameterized. You can submit several jobs with different arguments, and even combine it with Beam TestPipeline if you make it transient, as sketched below.
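For instance, here is a minimal sketch of a @Parameterized run against the shared cluster. It assumes the SubmittableMain job class from the previous example and that each parameter value is simply forwarded as a job argument; imports are omitted, as in the other snippets.
@RunWith(Parameterized.class)
public class SparkClusterParameterizedTest {

    @ClassRule // the cluster is created once and shared by all parameterized runs
    public static final SparkClusterRule SPARK = new SparkClusterRule("2.10", "1.6.3", 1);

    @Parameterized.Parameters(name = "{0}")
    public static Iterable<String> args() {
        return asList("input-1.txt", "input-2.txt");
    }

    @Parameterized.Parameter
    public String arg;

    @Test
    public void classpathSubmit() throws IOException {
        // one submission per parameter value
        SPARK.submitClasspath(SubmittableMain.class, arg);
        // then assert on the job output, as in the previous example
    }
}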
JUnit 5
The integration of the Spark cluster logic with JUnit 5 is done through the @WithSpark marker for the extension. Optionally, it lets you inject the BaseSpark<?> handler, through @SparkInject, to access the Spark cluster meta information, for example its host and port.
Example:
@WithSpark
class SparkExtensionTest {

    @SparkInject
    private BaseSpark<?> spark;

    @Test
    void classpathSubmit() throws IOException {
        final File out = new File(jarLocation(SparkClusterRuleTest.class).getParentFile(), "classpathSubmitJunit5.out");
        if (out.exists()) {
            out.delete();
        }
        spark.submitClasspath(SparkClusterRuleTest.SubmittableMain.class, spark.getSparkMaster(), out.getAbsolutePath());
        await().atMost(5, MINUTES).until(
                () -> out.exists() ? Files.readAllLines(out.toPath()).stream().collect(joining("\n")).trim() : null,
                equalTo("b -> 1\na -> 1"));
    }
}
Checking the job execution status
Currently, SparkClusterRule does not provide a way to know when a job execution is done, not even by exposing the web UI URL so you can poll it. The best solution at the moment is to check that the output of your job exists and contains the right value.
Awaitility, or any equivalent library, can help you implement such logic:
<dependency>
  <groupId>org.awaitility</groupId>
  <artifactId>awaitility</artifactId>
  <version>3.0.0</version>
  <scope>test</scope>
</dependency>
To wait until a file exists and check that its content (for example) is the expected one, you can use the following logic:
await()
    .atMost(5, MINUTES)
    .until(
        () -> out.exists() ? Files.readAllLines(out.toPath()).stream().collect(joining("\n")).trim() : null,
        equalTo("the expected content of the file"));