What is Orabuntu-LXC v6.0-beta AMIDE Edition?

Update (May 14, 2019): Visit the new Orabuntu-LXC documentation site:

https://sites.google.com/a/orabuntu-lxc.com/documentation/

Orabuntu-LXC v6.0-beta AMIDE Edition stands for “Amazon Multi-host Docker Enterprise” Edition.

Orabuntu-LXC is a turnkey software program for building an entire next-generation container infrastructure spanning multiple hosts, including LXC Linux containers, Docker containers, VMs, and physical hosts, all running on OpenvSwitch Software Defined Networks (SDNs), all networked to each other, and with container-friendly block device storage (SCST Linux SAN) available for direct attachment (if needed) to the LXC Linux containers, with everything running at bare-metal performance for network, CPU, and storage.

Orabuntu-LXC BUILDS EVERYTHING itself for the currently supported distros:

  • Oracle Linux 7.X
  • Ubuntu 16.04+
  • CentOS 7.X
  • Fedora 22-27 (tested on 27)
  • RedHat 7.X
  • Pop_OS 17.10+ (System76)


The Orabuntu-LXC installer does all of the following automatically:

  • Automatically detects your OS and branches to the appropriate build pathway
  • Deploys/builds OpenvSwitch from source as RPM or DEB packages
  • Builds the OpenvSwitch network
  • Configures VLANs on the OpenvSwitch network
  • Connects the OpenvSwitch network to the physical host interfaces using iptables rules
  • Deploys/builds LXC from source as RPM or DEB packages
  • Creates an LXC-containerized DNS/DHCP container running bind9 and isc-dhcp-server
  • Replicates the LXC-containerized DNS/DHCP container to all GRE-connected Orabuntu-LXC physical hosts
  • Optionally stores LXC-containerized DNS/DHCP updates at Amazon S3 for replication
  • Automatically detects filesystem types that support lxc-snapshot overlayfs for the LXC-containerized DNS/DHCP container
  • Updates the LXC-containerized DNS/DHCP container replicas with the latest zone and lease updates every x minutes
  • Builds the LXC containers
  • Configures all OpenvSwitch switches and LXC containers as systemd services
  • Configures gold-copy LXC containers (on a separate network) according to your specifications
  • Creates clones of the gold-copy LXC containers
  • Builds the SCST Linux SAN from source code as RPM or DKMS-enabled DEB packages
  • Creates the SCST target, group, and LUNs according to your specifications
  • Creates the multipath.conf file and configures multipath
  • Presents LUNs in three locations, including a container-friendly non-symlink location under /dev/containername
  • Presents LUNs directly to containers (only the LUNs for that container) at full bare-metal storage performance


Orabuntu-LXC does all of this and much more with just these simple steps:

Step 1

Make sure your Linux distribution is updated.

On Debian-family Linuxes this is:

sudo apt-get -y update
sudo apt-get -y upgrade

On RedHat-family Linuxes this is:

sudo yum -y update

Step 2

Manually install the following packages (if on a Debian-family Linux):

sudo apt-get -y install unzip wget openssh-server net-tools bind9utils

Manually install the following packages (if on a RedHat-family Linux):

sudo yum -y install unzip wget openssh-server net-tools bind-utils

Step 3

Download the latest Orabuntu-LXC v6.0x AMIDE release to /home/username/Downloads and unzip it, then navigate to the “anylinux” directory and run the following script (as a NON-root “administrative” user with “SUDO ALL” privilege or “wheel” privilege):

./anylinux-services.HUB.HOST.sh

That’s it. It runs fully automatically and delivers a complete next-generation LXC and Docker SDN container infrastructure.
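
For reference, a minimal sketch of Step 3 end to end, assuming the release archive is named orabuntu-lxc-6.03-beta.zip (substitute the actual archive name and your own username):

cd /home/username/Downloads
unzip orabuntu-lxc-6.03-beta.zip        # unpack the release archive
cd orabuntu-lxc-6.03-beta/anylinux      # the installers live in the "anylinux" directory
./anylinux-services.HUB.HOST.sh         # run as a non-root user with "sudo ALL" or "wheel" privilege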

If, on the other hand, you prefer to customize Orabuntu-LXC, it is quite flexible and configurable using the parameters in the file:

anylinux-services.sh


This includes support for two separate user-selectable IP subnet ranges, two domains, and much more. One network, for example the “seed” network, can be used as an out-of-band maintenance network, while the other network is used for production containers.

With the replicated and continuously updated LXC-containerized DNS/DHCP solution, GRE-connected hosts (such as developer laptops) can be disconnected from the network and still have full DNS/DHCP lookup services for any containers stored locally on the developer laptop. In addition, containers that the developer creates after detaching from the Orabuntu-LXC network can be added to the local replica of the LXC-containerized DNS/DHCP.

More Detailed: Install Orabuntu-LXC v6.0-beta AMIDE

An administrative non-root user account is required (this is the account used for installation). The non-root user needs to have “sudo ALL” privilege.

Be sure you are installing on an internet-connected, LAN-attached host that can download source software from repositories such as yum.oracle.com, archive.ubuntu.com, SourceForge, and others.

On a Debian-family Linux, such as Ubuntu, this would be membership in the “sudo” group, e.g.

orabuntu@UL-1710-S:~$ id orabuntu
uid=1001(orabuntu) gid=1001(orabuntu) groups=1001(orabuntu),27(sudo)
orabuntu@UL-1710-S:~$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=17.10
DISTRIB_CODENAME=artful
DISTRIB_DESCRIPTION="Ubuntu 17.10"
orabuntu@UL-1710-S:~$ 

On a RedHat-family Linux, such as Fedora, this would be membership in the “wheel” group, e.g.

[orabuntu@fedora27 archives]$ id orabuntu
uid=1000(orabuntu) gid=1000(orabuntu) groups=1000(orabuntu),10(wheel)
[orabuntu@fedora27 archives]$ cat /etc/fedora-release 
Fedora release 27 (Twenty Seven)
[orabuntu@fedora27 archives]$
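
If your account is not yet in the required group, a minimal sketch of adding it (assuming the username is orabuntu; run from an account that already has root or sudo rights, then log out and back in):

sudo usermod -aG sudo orabuntu      # Debian-family: add the user to the "sudo" group
sudo usermod -aG wheel orabuntu     # RedHat-family: add the user to the "wheel" group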

For Debian-family Linuxes the following script can be used to create the required administrative install user:

./orabuntu-services-0.sh

For RedHat-family Linuxes the following script can be used to create the required administrative install user:

./uekulele-services-0.sh

The first Orabuntu-LXC install is always the “HUB” host install.

Install the Orabuntu-LXC HUB host as shown below (if installing an Orabuntu-LXC release):

cd /home/username/Downloads/orabuntu-lxc-6.03-beta/anylinux
./anylinux-services.HUB.HOST.sh new

Install the Orabuntu-LXC HUB host as shown below (if installing from the DEV branch):

cd /home/username/Downloads/orabuntu-lxc-master/anylinux
./anylinux-services.HUB.HOST.sh new
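
The orabuntu-lxc-master directory above corresponds to a GitHub branch archive. Assuming the project is hosted at github.com/gstanden/orabuntu-lxc (verify against the documentation site), the DEV branch can be fetched with something like:

cd /home/username/Downloads
wget https://github.com/gstanden/orabuntu-lxc/archive/master.zip    # assumed repository location
unzip master.zip                                                    # unpacks to orabuntu-lxc-master/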


That’s all. This one command will do the following:

* Install required packages
* Install or build LXC from source 
* Install or build OpenvSwitch from source
* Build the LXC containerized DNS/DHCP 
* Detect filesystem type and use overlayfs technology if supported for LXC containerized DNS/DHCP
* Build Oracle Linux LXC containers
* Build the OpenvSwitch networks (with VLANs)
* Configure the IP subnets and domains specified in the anylinux-services.sh file
* Put the LXC containers on the OvS networks
* Build a DNS/DHCP LXC container
* Configure the containers according to specifications in the "product" subdirectory.
* Clone the number of containers specified in the anylinux-services.sh file
* Install Docker and a sample network-tools Docker container


Note that although the software is unpacked at /home/username/Downloads, nothing is actually installed there. The installation actually takes place at /opt/olxc/home/username/Downloads, which is where the installer places all installation files. The distribution at /home/username/Downloads remains static during the install.

The installation is customized and configured in the file:

anylinux-services.sh

Search for pgroup1, pgroup2, and pgroup3 to see the configurable settings. When first trying out Orabuntu-LXC, the best approach might be to build a VM of one of the supported vanilla Linux distros (Oracle Linux, Ubuntu, CentOS, Fedora, or Red Hat), download and run “./anylinux-services.HUB.HOST.sh new” as described above, and then, after the install, examine the setup to see how the configurations in “anylinux-services.sh” affect the deployment.
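
A quick way to locate those parameter groups in the file (a simple grep; the settings themselves are documented inline in anylinux-services.sh):

cd /home/username/Downloads/orabuntu-lxc-6.03-beta/anylinux
grep -n "pgroup[123]" anylinux-services.sh    # show the pgroup1/pgroup2/pgroup3 configurable settings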

To add additional physical hosts, use:

./anylinux-services.GRE.HOST.sh new

This script requires configuring the following parameters in the “anylinux-services.GRE.HOST.sh” script (example values are sketched after the list):

* SPOKEIP
* HUBIP
* HubUserAct
* HubSudoPwd
* Product
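
A sketch of what those settings might look like (the IP addresses below are placeholders; the HubUserAct/HubSudoPwd/Product values follow the defaults described in the next paragraph):

SPOKEIP=192.168.1.22      # IP address of the new GRE-connected (spoke) host (placeholder)
HUBIP=192.168.1.21        # IP address of the existing Orabuntu-LXC HUB host (placeholder)
HubUserAct=orabuntu       # administrative install user on the HUB host
HubSudoPwd=orabuntu       # that user's sudo password (or the generated password)
Product=oracle-db         # a product file set from the "products" directory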


If you used the scripts to create an “orabuntu” user, then HubUserAct=orabuntu and HubSudoPwd=orabuntu (or optionally the generated password). The products currently available in the “products” directory are “oracle-db” and “workspaces”, but you can create your own product file sets and put them in the products directory.

Note that the subnet ranges selected in the “anylinux-services.HUB.HOST.sh” install must be used unchanged when running the “anylinux-services.GRE.HOST.sh” script so that the multi-host networking works correctly.

To put VMs on the Orabuntu-LXC OpenvSwitch network, on either a HUB physical host or a GRE physical host, see the guide in the Orabuntu-LXC wiki, which gives an example (VirtualBox) of putting a VM on the LXC OpenvSwitch network.

To install Orabuntu-LXC in a VM running on the LXC OpenvSwitch network on the HUB host, use the following script. In this case, Orabuntu-LXC is already installed on the physical host, a VM has been put on the LXC OpenvSwitch networks, and now Orabuntu-LXC is installed in the VM.

This results in containers running inside the VM on the LXC OpenvSwitch network, as well as the existing LXC containers running on the Orabuntu-LXC physical host. All of these containers, VMs, and physical hosts can talk to each other by default.

./anylinux-services.VM.ON.HUB.HOST.1500.sh new

To install Orabuntu-LXC in a VM running on the LXC OpenvSwitch network on a GRE-connected host, use the following script:

./anylinux-services.VM.ON.GRE.HOST.1420.sh new

In this case, again, it is necessary to configure the following parameters in the “anylinux-services.VM.ON.GRE.HOST.1420.sh” script:

* SPOKEIP
* HUBIP
* HubUserAct
* HubSudoPwd
* Product

To add additional Oracle Linux container versions (e.g. add some Oracle Linux 7.3 LXC containers to a deployment of Oracle Linux 6.9 LXC containers), use either

anylinux-services.ADD.RELEASE.ON.HUB.HOST.1500.sh

or

anylinux-services.ADD.RELEASE.ON.GRE.HOST.1420.sh


depending on whether the container versions are being added on an Orabuntu-LXC HUB host or on a GRE-tunnel-connected Orabuntu-LXC host, respectively.

In this case it is necessary to edit the container version variables (MajorRelease, PointRelease) in pgroup2 of the anylinux-services.sh file.
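
For example, to add Oracle Linux 7.3 containers, the pgroup2 edit would be along these lines (values are illustrative):

# In anylinux-services.sh, pgroup2 section:
MajorRelease=7      # Oracle Linux major release to add
PointRelease=3      # Oracle Linux point release to add (i.e. 7.3)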

To add more clones of an already existing version, e.g. add additional Oracle Linux 7.3 LXC containers to a set of existing Oracle Linux 7.3 LXC containers, use

anylinux-services.ADD.CLONES.sh

Note that Orabuntu-LXC also includes the default LXC Linux bridge for that distro, e.g. for CentOS and Fedora:

[orabuntu@fedora27 logs]$ ifconfig virbr0
virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:8b:e7:18  txqueuelen 1000  (Ethernet)
        RX packets 3189  bytes 187049 (182.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4739  bytes 28087232 (26.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[orabuntu@fedora27 logs]$ cat /etc/fedora-release 
Fedora release 27 (Twenty Seven)
[orabuntu@fedora27 logs]$ 

And for Oracle Linux, Ubuntu and Red Hat Linux:

orabuntu@UL-1710-S:~$ ifconfig lxcbr0
lxcbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.0.3.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:16:3e:00:00:00  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

orabuntu@UL-1710-S:~$ 

To include containers other than Oracle Linux in your deployment, use the default LXC Linux bridge to add non-Orabuntu-LXC LXC containers; those containers will be able to talk to the containers on the OvS network right out of the box. In this way Ubuntu Linux LXC containers, Alpine Linux LXC containers, etc., can be added to the mix using the standard Linux bridge (non-OVS).
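
A minimal sketch of adding such a container with the standard LXC tooling (the container name and image details are examples only; the default LXC configuration attaches it to the distro's default bridge shown above):

sudo lxc-create -t download -n ubuntu-test -- -d ubuntu -r xenial -a amd64   # create an Ubuntu container from the download template
sudo lxc-start -n ubuntu-test                                                # start it on the default Linux bridge
sudo lxc-ls -f                                                               # it picks up a DHCP address from the bridge (e.g. 10.0.3.x on lxcbr0)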

Why Oracle Linux?

Why is Orabuntu-LXC built around Oracle Linux? We chose Oracle Linux because it is the only freely downloadable, readily available Red Hat-family Linux backed by the full power and credit of a major software vendor, indeed one of the largest, namely Oracle Corporation.

Oracle (unlike Red Hat) makes their production-grade Linux available for free (including free access to their public YUM servers), and under the direction of its current Product Management Director, Avi Miller, Oracle has made substantial and successful changes to Oracle Linux to make it very container-friendly, extremely fast, and an excellent platform for container deployments of all types.

Oracle Linux explicitly supports LXC and Docker containers, and since those are the core technologies supported by Orabuntu-LXC, we feel Oracle Linux is clearly the number one choice for production-grade Linux container deployments where a Red Hat-family Linux is required. We saw a need for a production-grade, industrial-strength container solution built around a Red Hat-family Linux backed by a major software vendor, and there is really only one credible choice that meets those requirements: Oracle Linux.

If you run Oracle Linux as your LXC host, and Orabuntu-LXC Oracle Linux LXC containers, you have a 100% Oracle Corporation next-generation container infrastructure solution at no cost, whether in development or in production, which can at any time be converted to paid support from Oracle Corporation, when and if the time comes for that.

Docker

Orabuntu-LXC deploys Docker for all of our supported platforms (Fedora, CentOS, Ubuntu, Oracle Linux, Red Hat), and the Docker containers on docker0 can by default be accessed on their ports from the LXC Linux containers, VMs, and physical hosts. This provides, out of the box, a mechanism to put multilayer products into LXC containers and connect them to services provided by Docker containers.
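
As a hedged illustration (the image and port here are arbitrary examples, not something shipped by Orabuntu-LXC):

sudo docker run -d --name web -p 8080:80 nginx    # publish a sample service on the Docker host
# from an LXC container, VM, or physical host on the Orabuntu-LXC networks:
curl http://<docker-host-ip>:8080/                # replace <docker-host-ip> with the host's address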

Virtual Machines

VMs can now be directly attached to the Orabuntu-LXC OpenvSwitch VLAN networks easily, using just the built-in functionality of, for example, the Oracle VirtualBox GUI. The VMs attached to OpenvSwitch get IP addresses on the same subnet as the LXC containers and have full out-of-the-box networking between the LXC containers running on the physical host and the VMs.
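
With VirtualBox the attachment can also be scripted from the command line. A sketch; the VM name "myvm" and bridge name "sw1" are placeholders, so check your actual OpenvSwitch bridge names first:

sudo ovs-vsctl show                                               # list the OpenvSwitch bridges created by Orabuntu-LXC
VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 sw1    # bridge the VM's first NIC to the OVS bridge (VM must be powered off)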

But even beyond that, Orabuntu-LXC can be installed inside the VMs that are already on the host OpenvSwitch network, and the Orabuntu-LXC Linux containers inside the VMs will have full out-of-the-box networking with all the VMs, all the physical hosts (HUB or GRE), and all the LXC and Docker containers running on the physical hosts. All of these VMs and containers will be in DNS and will be reachable from each other via their DNS names, with full forward and reverse lookup services provided by the redundant, fault-tolerant new Orabuntu-LXC DNS/DHCP LXC container replicas.

Orabuntu-LXC DNS/DHCP Replication

Version 6.0-beta AMIDE Edition includes near real-time replication of the LXC DNS/DHCP container on the OpenvSwitch networks. The Orabuntu-LXC HUB host runs the primary DNS/DHCP LXC container, which provides DNS/DHCP services to all GRE-connected physical hosts, VMs, and LXC containers, whether on a physical host or in a VM.

Every Orabuntu-LXC physical host, when deployed, automatically gets a replica of the DNS/DHCP LXC container from the HUB host. This replica is installed in the down state and remains down while the HUB host LXC DNS/DHCP container is running. However, every 5 minutes (or at an interval specified by the user) the LXC DNS/DHCP container on the HUB host checks for any DNS/DHCP zone updates and, if it finds any, propagates those changes to all the DNS/DHCP LXC container replicas on all GRE-connected Orabuntu-LXC physical hosts.

If at any time DNS/DHCP services are needed locally, for example if the HUB DNS/DHCP goes down, or if a GRE-connected host needs to be detached from the network, the replica DNS/DHCP LXC container can be started on that local host. It will immediately apply all of the latest updates from the master DNS/DHCP LXC container on the HUB host (using the “dns-sync” service) and will be able to resolve DNS and provide DHCP for all GRE-connected hosts and the HUB host on the network. (Be sure that only one DNS/DHCP LXC replica is up at any given time.)
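
A minimal sketch of bringing up a local replica (the container name is a placeholder; use lxc-ls to find the actual DNS/DHCP container name on your host):

sudo lxc-ls -f                        # find the DNS/DHCP replica container (normally in the STOPPED state)
sudo lxc-start -n <dns-dhcp-replica>  # start it; the dns-sync service applies the latest zone and lease updates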


A replica can be converted to master status simply by copying the list of client GRE-connected physical hosts to the DNS/DHCP replica, because every replica has all the scripting on board to function as the primary DNS/DHCP. This can also be useful when a developer laptop is a GRE-replicated host, since it provides the developer with full DNS/DHCP, while disconnected from the network, for all LXC containers installed locally on the laptop.

This capability can be used with any HA monitoring solution, such as HP Serviceguard, to ensure that at all times at least one DNS/DHCP LXC container on the network is up and running.

OpenvSwitch

Orabuntu-LXC uses OpenvSwitch as its core switch technology. This means that all of the power of OpenvSwitch production-grade Software Defined Networking (SDN) is available in an Orabuntu-LXC deployment. This includes a rich production-ready switch feature set (http://www.openvswitch.org/) and other high-performance features that can be added on, such as OVS-DPDK (https://software.intel.com/en-us/articles/open-vswitch-with-dpdk-overview).
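
Standard OpenvSwitch tooling can be used to inspect what Orabuntu-LXC built, for example:

sudo ovs-vsctl list-br    # list the OpenvSwitch bridges
sudo ovs-vsctl show       # show bridges, ports, and VLAN tags in detail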

SCST Linux SAN

The included Orabuntu-LXC SCST Linux SAN deployer (scst-files.tar) clears away the fog that has for too long surrounded SCST deployments on Ubuntu Linux. The Orabuntu-LXC SCST Linux SAN deployer installs SCST on Ubuntu Linux using DKMS-enabled DEB packages, for worry-free, hands-off SCST operation across host kernel updates. Support for RPM-based distros, as well as DEB-based distros, is FULLY AUTOMATED from start to finish.

Kick off the Orabuntu-LXC SCST installer and go get a cup of espresso or jog around the block. When you come back, multipathed, production-ready LUNs are awaiting your project, and the /etc/multipath.conf file has been built and installed for you automatically. SCST module updates after host kernel updates are handled transparently by DKMS technology, allowing users and administrators to focus on the rich production-ready feature set of SCST used by many of the biggest technology, services, and hardware companies: http://scst.sourceforge.net/users.html
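
Once the SCST installer finishes, a quick sanity check might look like this (standard multipath tooling; the container name under /dev is whatever was configured for your deployment):

sudo multipath -ll            # list the multipathed LUNs built from /etc/multipath.conf
ls -l /dev/containername      # the container-friendly non-symlink LUN location (name varies per container)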

WeaveWorks

Although Orabuntu-LXC provides its own multi-host solution, it can also be used with WeaveWorks technologies, and it can be managed from Google Cloud Platform (GCP) using WeaveWorks technology for web-based access and management from anywhere.

Security Considerations

The Orabuntu-LXC multi-host configuration does NOT require key-exchange technology, a.k.a. automatic key logins. Therefore, Orabuntu-LXC can be used in PCI-DSS environments where key exchange is not permitted, and in any scenario in which no data at all may be written during login. Orabuntu-LXC is also well suited to certain forms of LDAP authentication mechanisms where public-key technology cannot be used, and it is potentially useful for implementation on routers where public-key technology again may not be an option.

The root account is NOT used for Orabuntu-LXC installation. All that is currently required is an administrative user with “SUDO ALL” privilege, and work is underway on the roadmap to define the minimum set of SUDO privileges so that not even SUDO ALL privilege will be needed.

Interestingly, once Orabuntu-LXC is installed, the administrative SUDO ALL user used to install it can actually be DELETED (using userdel, for example) because after installation that user is no longer needed. All the GRE tunnels and other Orabuntu-LXC functionality continue to operate normally under root authority even though the install user no longer exists. For GRE-connected Orabuntu-LXC hosts, there is a user called “amide”, with sudo privileges for “mkdir” and “cp”, that is used to handle updating the LXC-containerized DNS/DHCP replicas across all the Orabuntu-LXC physical hosts; even this user can be replaced with the built-in Orabuntu-LXC Amazon S3 LXC-containerized DNS/DHCP replication option for secured operations.
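
For example, after the install completes on the HUB host (only do this if you are sure the account is no longer needed for anything else):

sudo userdel -r orabuntu    # remove the administrative install user and its home directory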

Installer Logging

The Orabuntu-LXC installer uses the highly sophisticated sudoreplay logging facility, which not only logs every single sudo command, but also allows actual PLAYBACK of each installer step – not just a static log, but an actual VIDEO of that install step. sudoreplay allows speedup or slowdown of the install step video, so it is possible to review a lengthy install step (such as building OpenvSwitch from source code) sped up. Playback includes every input and output that the real session encountered, so the complete session is captured in all respects. And this functionality does not require any direct edit of the sudoers file; instead it uses /etc/sudoers.d and sets removable parameters that can be turned off/removed after the install.
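
A sketch of reviewing an install step with sudoreplay (standard sudoreplay usage; the session ID comes from the listing):

sudo sudoreplay -l             # list recorded sudo sessions with their TSIDs
sudo sudoreplay -s 4 <TSID>    # replay a chosen session at 4x speed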

To monitor the log during or after an install, go to the “installs/logs” subdirectory of the Orabuntu-LXC release and cat or tail the root-owned “username.log” file.

Gilbert Standen, Founder and Creator, Principal Solution Architect, Orabuntu-LXC, St. Louis, MO, March 2018. [email protected]