Typically, users interested in Hadoop and Spark have data volumes and workloads that demand the power of cluster computing. However, some people use Spark for its expressive API, even when their data volumes are small to medium. Because Domino lets you run code on powerful VM infrastructure, with up to 32 cores on AWS, you can use Domino to create a local Spark cluster and easily parallelize your tasks across all 32 cores.
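As a minimal sketch of what this looks like in practice, the PySpark snippet below starts a local "cluster" whose master URL is `local[*]`, which tells Spark to run one worker thread per available core (you could write `local[32]` to pin the count explicitly). The squaring job is purely illustrative, and the example assumes PySpark is installed in your environment.

```python
from pyspark.sql import SparkSession

# Start a local Spark "cluster": local[*] uses one worker thread per core.
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("local-parallelism-sketch")
    .getOrCreate()
)
sc = spark.sparkContext

# Distribute a trivial computation across all local cores.
squares = sc.parallelize(range(1000)).map(lambda x: x * x).collect()
print(squares[:10])

spark.stop()
```

Because all the worker threads live in a single JVM on one machine, there is no cluster manager to configure; you get Spark's API and core-level parallelism with none of the operational overhead of a real cluster.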