This is a dotData fork of Akkeeper. Release versions are tagged and pushed as follows:
git tag -a v0.4.12 -m "v0.4.12"
git push origin v0.4.12
Akkeeper is published to Maven Central, so the artifacts can be added to your build directly:
libraryDependencies ++= Seq(
  "com.dotdata" %% "akkeeper-api" % "0.4.12",
  "com.dotdata" %% "akkeeper-common" % "0.4.12",
  "com.dotdata" %% "akkeeper-launcher" % "0.4.12",
  "com.dotdata" %% "akkeeper-yarn" % "0.4.12"
)
Akkeeper (Akka Keeper or Actor Kernel Keeper) is an easy way to deploy your Akka application to a distributed environment. Akka is a widely used Actor framework, but there are still no established practices for deploying applications built on top of it. Akkeeper provides a powerful set of capabilities to maintain your cluster: you can easily deploy, terminate and monitor your services at runtime. Akkeeper was built with Hadoop as the primary use case, which is why it currently supports only YARN as a resource manager, but this doesn't mean that other environments won't appear in the future. Apache Spark and Apache Flink are good examples of Akka applications on Hadoop: although both of them are data processing frameworks, they demonstrate that YARN is not only MapReduce and can be used to distribute any kind of application. As a result, your application acquires elasticity and resilience out of the box.
Some of the features provided by Akkeeper:
- Builds the Akka Cluster and automatically discovers all cluster participants.
- Allows launching and terminating instances at runtime.
- Application Master fault tolerance. Rejoins the existing cluster after restart.
Here are several ways Akkeeper can be useful for your project:
- Distribute your microservices using Akkeeper.
- Keep your master service(s) separately and use Akkeeper to launch new workers/executors on demand.
The project documentation is under construction.
Requirements:
- Java 8
- SBT version >= 1.0.0
To build a bundle for Scala 2.11:
// ZIP
sbt ++2.11.11 universal:packageBin
// TGZ
sbt ++2.11.11 universal:packageZipTarball
Scala 2.12:
// ZIP
sbt ++2.12.6 universal:packageBin
// TGZ
sbt ++2.12.6 universal:packageZipTarball
To build examples:
sbt package
To understand how Akkeeper works, you only need to be aware of two concepts: containers and instances.
A container defines the environment in which an instance runs. It determines how many resources should be allocated for the instance and which actors should be launched as part of it. Containers can be added, removed and modified dynamically at runtime.
An instance is an execution unit in Akkeeper: a running process with the capabilities and properties of its container. "Deploy a container" and "launch an instance of container container_name" mean the same thing: launching a process on some node in the cluster using the specified container's definition. Instances can be launched, monitored and terminated dynamically at runtime.
Download and unpack the latest Akkeeper package:
In order to use the Scala API, add the following dependency to build.sbt:
libraryDependencies += "com.dotdata" %% "akkeeper-api" % "0.4.12"
If you need to launch Akkeeper from your code, add the following dependency as well:
libraryDependencies += "com.dotdata" %% "akkeeper-launcher" % "0.4.12"
The easiest way to start using Akkeeper is through a configuration file. Here is a quick start configuration example:
akkeeper {
  # The list of container definitions.
  containers = [
    {
      # The unique name of the container.
      name = "myContainer"
      # The list of actors that will be launched in scope of this container.
      actors = [
        {
          # The actor's name.
          name = "myActor"
          # The fully qualified name of the Actor implementation.
          fqn = "com.test.MyActor"
        }
      ]
      # The number of CPUs that has to be allocated.
      cpus = 1
      # The amount of RAM in MB that has to be allocated.
      memory = 1024
      # Additional JVM arguments that will be passed to instances of this container.
      jvm-args = [ "-Xmx2G" ]
      # Custom Java properties. Can be used to override the configuration values.
      properties {
        myapp.myservice.property = "value"
        akka.cluster.roles.0 = "myRole1"
        akka.cluster.roles.1 = "myRole2"
      }
    }
  ]
}
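The fqn field above points at a regular Akka actor. Below is a minimal sketch of what com.test.MyActor could look like; the class is hypothetical and for illustration only, and it assumes that the properties section of the container definition is applied as Java system properties and therefore visible through the actor system's configuration.
import akka.actor.{Actor, ActorLogging}

// Hypothetical actor matching the "fqn" from the container definition above.
class MyActor extends Actor with ActorLogging {

  // Assumption: the container's "properties" section overrides the
  // configuration values seen by the actor system.
  private val property =
    context.system.settings.config.getString("myapp.myservice.property")

  override def preStart(): Unit =
    log.info(s"Started with myapp.myservice.property = $property")

  override def receive: Receive = {
    case message => log.info(s"Received: $message")
  }
}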
Make sure your HADOOP_CONF_DIR and YARN_CONF_DIR environment variables point to the directory where the Hadoop configuration files are stored. The ZK_QUORUM variable should also contain a comma-separated list of ZooKeeper servers.
Now just pass this file, together with the JAR archive that contains the com.test.MyActor actor, to Akkeeper:
./bin/akkeeper-submit --config ./config.conf /path/to/my.jar
This approach is applicable only when you're managing an Akkeeper cluster from an application or service that is part of the same Akka cluster.
Use this API to launch new instances on a cluster.
import akka.actor.ActorSystem
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import akkeeper.api._
import akkeeper.master.service.DeployService
...
val actorSystem = ActorSystem("AkkeeperSystem")
// The ask pattern needs an implicit timeout; the callback needs an execution context.
implicit val timeout: Timeout = Timeout(30.seconds)
import actorSystem.dispatcher
// Create a remote Deploy Service actor reference.
val deployService = DeployService.createRemote(actorSystem)
// Launch 1 instance of container "myContainer".
(deployService ? DeployContainer("myContainer", 1)).onSuccess {
  case SubmittedInstances(requestId, containerName, instanceIds) => // submitted successfully.
  case OperationFailed(requestId, reason) => // deploy failed.
}
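If you are not inside an actor and simply want to wait for the result (for example, in a small command-line tool), the same ask can be awaited synchronously. A minimal sketch, reusing the timeout and imports defined above:
import scala.concurrent.Await

// Sketch: block until the deployment result arrives instead of registering
// a callback. Reuses the implicit timeout defined above.
val response = Await.result(deployService ? DeployContainer("myContainer", 2), 30.seconds)
response match {
  case SubmittedInstances(_, containerName, instanceIds) =>
    println(s"Submitted ${instanceIds.size} instance(s) of $containerName")
  case OperationFailed(_, reason) =>
    println(s"Deployment failed: $reason")
}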
Use this API to track instance status or terminate a running instance.
import akka.actor.ActorSystem
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import akkeeper.api._
import akkeeper.master.service.MonitoringService
...
val actorSystem = ActorSystem("AkkeeperSystem")
implicit val timeout: Timeout = Timeout(30.seconds)
import actorSystem.dispatcher
// Create a remote Monitoring Service actor reference.
val monitoringService = MonitoringService.createRemote(actorSystem)
// Fetch the list of running instances.
(monitoringService ? GetInstances()).onSuccess {
  case InstancesList(requestId, instanceIds) => // a list of running instances.
  case OperationFailed(requestId, reason) => // fetching process failed.
}
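The example above only reads instance state. Termination goes through the same service; here is a minimal sketch that assumes a TerminateInstance message exists in akkeeper.api (its exact shape is not shown in this README, so verify it against the message list referenced below) and that the implicits from the previous example are in scope:
// Sketch: terminate one of the running instances.
// TerminateInstance(instanceId) is an assumed message name; verify it against
// the Monitoring Service message list.
(monitoringService ? GetInstances()).onSuccess {
  case InstancesList(_, instanceIds) =>
    instanceIds.headOption.foreach { id =>
      (monitoringService ? TerminateInstance(id)).onSuccess {
        case OperationFailed(_, reason) => // termination failed.
        case _ => // termination request accepted.
      }
    }
}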
Here is the list of all messages supported by the Monitoring Service.
Use this API to fetch, create, update or delete container definitions.
import akka.actor.ActorSystem
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import akkeeper.api._
import akkeeper.master.service.ContainerService
...
val actorSystem = ActorSystem("AkkeeperSystem")
implicit val timeout: Timeout = Timeout(30.seconds)
import actorSystem.dispatcher
// Create a remote Container Service actor reference.
val containerService = ContainerService.createRemote(actorSystem)
// Fetch the list of existing containers.
(containerService ? GetContainers()).onSuccess {
  case ContainersList(requestId, containerNames) => // a list of available containers.
  case OperationFailed(requestId, reason) => // fetching process failed.
}
Here is the list of all messages supported by the Container Service.
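Since all three services are plain actor references, they compose naturally. A short sketch that launches one instance of every known container, using only the messages shown in the examples above and assuming that deployService and the implicits from the earlier examples are still in scope:
// Sketch: launch one instance of every container definition known to the cluster.
// Assumes deployService, the implicit timeout and the dispatcher are in scope.
(containerService ? GetContainers()).onSuccess {
  case ContainersList(_, containerNames) =>
    containerNames.foreach { name =>
      (deployService ? DeployContainer(name, 1)).onSuccess {
        case SubmittedInstances(_, _, instanceIds) => // launched successfully.
        case OperationFailed(_, reason) => // deploy failed.
      }
    }
}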