CoreCluster architecture


This document covers the internal architecture of CoreCluster, the CoreNode modules and the communication channel between them.

The CoreCluster IaaS cloud consists of three main modules: the corecluster package for the management node, corenode for all computing nodes, and corenetwork, which provides common libraries for all of the above nodes. This article covers all of these parts and explains the internal dependencies between them.

API and CI

The API is used by users, interfaces and libraries to manage resources in a cloud driven by CoreCluster. It is the ordinary, publicly exposed set of endpoints for creating new instances, managing images and so on. The second, similar interface is called the Cluster Interface (CI). It is used for communication between the compute nodes and the management node, to report node health and instance states.

Through the CI, each node reports its presence and shutdown, as well as every state change of the virtual machines running on it. The CI is also used to obtain the necessary configuration from CoreCluster Management, such as routed networking settings and the Corosync configuration.
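
As a rough illustration, a CI notification can be thought of as a simple HTTP call from the compute node to the management node. The endpoint path, port and payload fields below are assumptions made only for this sketch; the real CI protocol is defined by CoreCluster.

    import requests

    # Address of the management node's CI, normally obtained via Avahi (see below).
    # The URL, path and JSON fields are hypothetical and only illustrate the idea.
    CI_URL = "http://management.local:8600/ci"

    def report_vm_state(node_id, vm_id, state):
        # Notify CoreCluster Management about a virtual machine state change.
        response = requests.post(CI_URL + "/vm/state", json={
            "node": node_id,
            "vm": vm_id,
            "state": state,
        })
        response.raise_for_status()

    report_vm_state("node-01", "vm-1234", "running")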

Node state updates

Each time a Compute Node starts, the cc-node command is invoked by a systemd service. On node startup it is called twice: first with the configure parameter and then with the start parameter. The first call prepares the configuration using the CI: the Quagga, Corosync and Avahi configs are updated to the versions provided by CoreCluster Management.

On node shutdown, cc-node is called with the stop parameter.
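
Summarizing the lifecycle described above, the systemd service effectively runs the following calls (shown here only to illustrate the order in which they happen):

    cc-node configure   # fetch configuration from the CI; update Quagga, Corosync and Avahi configs
    cc-node start       # report over the CI that the node is up
    cc-node stop        # executed on shutdown to notify CoreCluster Management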

Drivers

Each of the above commands is handled by drivers defined in the app.py file of each application enabled in /etc/corenode/config.py or /etc/corecluster/config.py. The same mechanism applies to both the CoreCluster and CoreNode packages. A separate article describes app.py in more detail. The part responsible for driver definitions (in 17.04) contains the following:

    'drivers': {
        'NETWORK_ROUTED_DRIVER': 'corenetwork.drivers.network_quagga',
        'NETWORK_ISOLATED_DRIVER': 'corenetwork.drivers.network_vxlan',
        'CORE_DRIVER': 'corenetwork.drivers.core_default',
        'NODE_DRIVER': 'corenetwork.drivers.node_default',
        'VM_DRIVER': 'corenetwork.drivers.vm_default',
    },

cc-node obtains the list of all drivers defined by the applications with a DriverInterface method:

drivers = DriverInterface.get_all_drivers()

Next, depending on the parameter, the appropriate method is called on each driver:

driver.configure_node()

or another method, depending on the action and the role (management or node).

The driver methods are responsible for fetching the relevant configuration, generating new configuration on the management node and configuring resources.
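
Putting the pieces together, the dispatch performed by cc-node can be pictured roughly as follows. Only DriverInterface.get_all_drivers() and configure_node() come from the text above; the action names and the start_node/stop_node method names are assumptions made for this sketch.

    # Illustrative sketch only: cc-node obtains the drivers via
    # DriverInterface.get_all_drivers() (provided by corenetwork) and then calls a
    # method matching the requested action on each of them. The action-to-method
    # mapping below is an assumption made for this example.
    def dispatch(action, drivers):
        for driver in drivers:
            if action == "configure":
                driver.configure_node()
            elif action == "start":
                driver.start_node()   # hypothetical method name
            elif action == "stop":
                driver.stop_node()    # hypothetical method name

    # Usage inside cc-node (sketch):
    #     drivers = DriverInterface.get_all_drivers()
    #     dispatch("configure", drivers)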

CoreCluster Management autodiscovery

To simplify network configuration, Avahi autodiscovery is used: the CI address is obtained automatically by each Compute Node in the CoreCluster. Avahi broadcasts information about services available on each host in the network, and CoreCluster uses it to advertise the CI throughout the whole CoreCluster network. By default, each node uses this mechanism to look for the CoreCluster Management node. Once the proper information has been obtained via Avahi, no further configuration of the computing node is necessary.
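
For illustration, the sketch below shows how a service advertised over Avahi/mDNS could be discovered from Python using the python-zeroconf library. The service type _corecluster._tcp.local. is a made-up placeholder; CoreCluster's actual service name and the way cc-node performs the lookup may differ.

    import time
    from zeroconf import Zeroconf, ServiceBrowser

    class CIListener:
        # Called by ServiceBrowser whenever a matching service appears on the network.
        def add_service(self, zc, service_type, name):
            info = zc.get_service_info(service_type, name)
            if info:
                print("CI discovered at %s:%d" % (info.parsed_addresses()[0], info.port))

        def remove_service(self, zc, service_type, name):
            print("CI service %s disappeared" % name)

        def update_service(self, zc, service_type, name):
            pass

    zc = Zeroconf()
    # "_corecluster._tcp.local." is a hypothetical service type used only in this example.
    browser = ServiceBrowser(zc, "_corecluster._tcp.local.", CIListener())
    try:
        time.sleep(5)   # give the browser a moment to receive announcements
    finally:
        zc.close()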

All further communication with the CI uses the URL obtained via Avahi. If this URL changes in the network (for example after a network failure, or when a redundant management node is present), Avahi can provide the new address without a long outage.

For security reasons, autodiscovery should be disabled in production environments. To disable it, edit the /etc/corenetwork/config.py file. In production environments it is also possible to provide multiple CI endpoints on each Compute Node, for load balancing and better failure resistance.
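
As a rough sketch, such a configuration could look like the snippet below. The option names are assumptions made for this example only; consult the CoreCluster documentation for the actual settings available in /etc/corenetwork/config.py.

    # /etc/corenetwork/config.py -- illustrative only; option names are hypothetical
    AUTODISCOVERY = False                      # assumed flag: do not look up the CI via Avahi
    CI_ENDPOINTS = [                           # assumed static list of CI endpoints
        "https://management-1.example.com/ci",
        "https://management-2.example.com/ci",
    ]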

Author: Maciej Nabozny. Published: April 25, 2017.