Rackspace Cloud Guide

    This section of the documentation is under construction. We are in the process of adding more examples about the Rackspace modules and how they work together. Once complete, there will also be additional Rackspace Cloud examples.

    Ansible contains a number of core modules for interacting with Rackspace Cloud.

    The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in a Rackspace Cloud context.

    Prerequisites for using the rax modules are minimal. In addition to ansible itself, all of the modules require and are tested against pyrax 1.5 or higher. You’ll need this Python module installed on the execution host.

    pyrax is not currently available in many operating system package repositories, so you will likely need to install it via pip:
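        $ pip install pyrax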

    Ansible creates an implicit localhost that executes in the same context as the ansible-playbook and the other CLI tools. If for any reason you need or want to have it in your inventory, you should do something like the following:

        [localhost]
        localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2

    For more information see Implicit Localhost.

    In playbook steps, we’ll typically be using the following pattern:

        - hosts: localhost
          gather_facts: False
          tasks:

    Credentials File

    The rax.py inventory script and all rax modules support a standard pyrax credentials file that looks like:

        [rackspace_cloud]
        username = myraxusername
        api_key = d41d8cd98f00b204e9800998ecf8427e

    Setting the environment variable RAX_CREDS_FILE to the path of this file tells Ansible where to load this information from.
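    For example, in a POSIX shell:

        $ export RAX_CREDS_FILE=~/.raxpub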

    More information about this credentials file can be found in the pyrax documentation.

    Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.

    There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules; however, when instructed by setting the inventory variable ‘ansible_python_interpreter’, Ansible will use this specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on ‘localhost’, or perhaps running via ‘local_action’, are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:

        [localhost]
        localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python

    Note

    pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.

    Now for the fun parts.

    Note

    Authentication with the Rackspace-related modules is handled by either specifying your username and API key as environment variables or passing them as module arguments, or by specifying the location of a credentials file.
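    For example, the same credentials shown in the file above could be supplied through the environment instead (RAX_USERNAME and RAX_API_KEY are the variables the rax modules check):

        $ export RAX_USERNAME=myraxusername
        $ export RAX_API_KEY=d41d8cd98f00b204e9800998ecf8427e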

    Here is a basic example of provisioning an instance in ad-hoc mode:

        $ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes"

    Here’s what it would look like in a playbook, assuming parameters such as rax_name, rax_flavor, rax_image, and rax_count were defined in variables (the variable names here are illustrative):
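        - name: Provision a set of instances
          rax:
            name: "{{ rax_name }}"
            flavor: "{{ rax_flavor }}"
            image: "{{ rax_image }}"
            count: "{{ rax_count }}"
            group: "{{ group }}"
            wait: yes
          register: rax
          delegate_to: localhost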

    The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called “raxhosts”, with each node’s hostname, IP address, and root password being added to the inventory.

        - name: Add the instances we created (by public IP) to the group 'raxhosts'
          add_host:
            hostname: "{{ item.name }}"
            ansible_host: "{{ item.rax_accessipv4 }}"
            ansible_ssh_pass: "{{ item.rax_adminpass }}"
            groups: raxhosts
          loop: "{{ rax.success }}"
          when: rax.action == 'create'

    With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.

        - name: Configuration play
          hosts: raxhosts
          user: root
          roles:
            - ntp
            - webserver

    The method above ties the configuration of a host with the provisioning step. This isn’t always what you want, and leads us to the next section.

    Host Inventory

    Once your nodes are spun up, you’ll probably want to talk to them again. The best way to handle this is to use the “rax” inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, etc. Utilizing metadata is highly recommended in “rax” and can provide an easy way to sort between host groups and roles. If you don’t want to use the rax.py dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.

    In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
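    For example, a hypothetical inventory directory (file names illustrative) might contain:

        inventory/
            hosts      # static INI inventory, not executable
            rax.py     # dynamic inventory script, chmod +x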

    rax.py

    To use the Rackspace dynamic inventory script, copy rax.py into your inventory directory and make it executable. You can specify a credentials file for rax.py utilizing the RAX_CREDS_FILE environment variable.
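    For example (paths illustrative):

        $ cp rax.py inventory/
        $ chmod +x inventory/rax.py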

    Note

    Dynamic inventory scripts (like rax.py) are saved in /usr/share/ansible/inventory if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to a corresponding location inside that virtualenv.

    Note

    Users of Ansible Tower will note that dynamic inventory is natively supported by Tower, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps:

        $ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup

    rax.py also accepts a RAX_REGION environment variable, which can contain an individual region, or a comma separated list of regions.
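    For example, to restrict the inventory to the ORD and DFW regions:

        $ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD,DFW ansible all -i rax.py -m setup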

    When using rax.py, you will not have a ‘localhost’ defined in the inventory.

    Executing ansible or ansible-playbook and specifying the inventory directory instead of an individual file will cause Ansible to evaluate each file in that directory for inventory.

    Let’s test our inventory script to see if it can talk to Rackspace Cloud.

        $ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup

    Assuming things are properly configured, the rax.py inventory script will output information similar to the following, which will be utilized for inventory and variables.

        {
            "ORD": [
                "test"
            ],
            "_meta": {
                "hostvars": {
                    "test": {
                        "ansible_host": "198.51.100.1",
                        "rax_accessipv4": "198.51.100.1",
                        "rax_accessipv6": "2001:DB8::2342",
                        "rax_addresses": {
                            "private": [
                                {
                                    "addr": "192.0.2.2",
                                    "version": 4
                                }
                            ],
                            "public": [
                                {
                                    "addr": "198.51.100.1",
                                    "version": 4
                                },
                                {
                                    "addr": "2001:DB8::2342",
                                    "version": 6
                                }
                            ]
                        },
                        "rax_config_drive": "",
                        "rax_created": "2013-11-14T20:48:22Z",
                        "rax_flavor": {
                            "id": "performance1-1",
                            "links": [
                                {
                                    "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
                                    "rel": "bookmark"
                                }
                            ]
                        },
                        "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
                        "rax_human_id": "test",
                        "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
                        "rax_image": {
                            "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
                            "links": [
                                {
                                    "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
                                    "rel": "bookmark"
                                }
                            ]
                        },
                        "rax_key_name": null,
                        "rax_links": [
                            {
                                "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                                "rel": "self"
                            },
                            {
                                "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                                "rel": "bookmark"
                            }
                        ],
                        "rax_metadata": {
                            "foo": "bar"
                        },
                        "rax_name": "test",
                        "rax_name_attr": "name",
                        "rax_networks": {
                            "private": [
                                "192.0.2.2"
                            ],
                            "public": [
                                "198.51.100.1",
                                "2001:DB8::2342"
                            ]
                        },
                        "rax_os-dcf_diskconfig": "AUTO",
                        "rax_os-ext-sts_power_state": 1,
                        "rax_os-ext-sts_task_state": null,
                        "rax_os-ext-sts_vm_state": "active",
                        "rax_progress": 100,
                        "rax_status": "ACTIVE",
                        "rax_tenant_id": "111111",
                        "rax_updated": "2013-11-14T20:49:27Z",
                        "rax_user_id": "22222"
                    }
                }
            }
        }

    When utilizing a standard INI-formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.

    This can be achieved with the rax_facts module and an inventory file similar to the following (the host names are illustrative; the rax_region hostvar supplies each server’s region):
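        [test_servers]
        hostname1 rax_region=ORD
        hostname2 rax_region=ORD

    A playbook can then look up each server by name and map the returned facts onto the host: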

        - name: Gather info about servers
          hosts: test_servers
          gather_facts: False
          tasks:
            - name: Get facts about servers
              rax_facts:
                credentials: ~/.raxpub
                name: "{{ inventory_hostname }}"
                region: "{{ rax_region }}"
              delegate_to: localhost
            - name: Map some facts
              set_fact:
                ansible_host: "{{ rax_accessipv4 }}"

    While you don’t need to know how it works, it may be interesting to know what kind of variables are returned.

    The rax_facts module provides facts as follows, which match the rax.py inventory script:

        {
            "ansible_facts": {
                "rax_accessipv4": "198.51.100.1",
                "rax_accessipv6": "2001:DB8::2342",
                "rax_addresses": {
                    "private": [
                        {
                            "addr": "192.0.2.2",
                            "version": 4
                        }
                    ],
                    "public": [
                        {
                            "addr": "198.51.100.1",
                            "version": 4
                        },
                        {
                            "addr": "2001:DB8::2342",
                            "version": 6
                        }
                    ]
                },
                "rax_config_drive": "",
                "rax_created": "2013-11-14T20:48:22Z",
                "rax_flavor": {
                    "id": "performance1-1",
                    "links": [
                        {
                            "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
                            "rel": "bookmark"
                        }
                    ]
                },
                "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
                "rax_human_id": "test",
                "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
                "rax_image": {
                    "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
                    "links": [
                        {
                            "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
                            "rel": "bookmark"
                        }
                    ]
                },
                "rax_key_name": null,
                "rax_links": [
                    {
                        "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                        "rel": "self"
                    },
                    {
                        "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                        "rel": "bookmark"
                    }
                ],
                "rax_metadata": {
                    "foo": "bar"
                },
                "rax_name": "test",
                "rax_name_attr": "name",
                "rax_networks": {
                    "private": [
                        "192.0.2.2"
                    ],
                    "public": [
                        "198.51.100.1",
                        "2001:DB8::2342"
                    ]
                },
                "rax_os-dcf_diskconfig": "AUTO",
                "rax_os-ext-sts_power_state": 1,
                "rax_os-ext-sts_task_state": null,
                "rax_os-ext-sts_vm_state": "active",
                "rax_progress": 100,
                "rax_status": "ACTIVE",
                "rax_tenant_id": "111111",
                "rax_updated": "2013-11-14T20:49:27Z",
                "rax_user_id": "22222"
            },
            "changed": false
        }

    This section covers some additional usage examples built around a specific use case.

    Network and Server

    Create an isolated cloud network and build a server

        - name: Build Servers on an Isolated Network
          hosts: localhost
          gather_facts: False
          tasks:
            - name: Network create request
              rax_network:
                credentials: ~/.raxpub
                label: my-net
                cidr: 192.168.3.0/24
                region: IAD
                state: present
              delegate_to: localhost

            - name: Server create request
              rax:
                credentials: ~/.raxpub
                name: web%04d.example.org
                flavor: 2
                image: ubuntu-1204-lts-precise-pangolin
                disk_config: manual
                networks:
                  - public
                  - my-net
                region: IAD
                state: present
                count: 5
                exact_count: yes
                group: web
                wait: yes
                wait_timeout: 360
              register: rax
              delegate_to: localhost

    Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html

        ---
        - name: Build environment
          hosts: localhost
          gather_facts: False
          tasks:
            - name: Load Balancer create request
              rax_clb:
                credentials: ~/.raxpub
                name: my-lb
                port: 80
                protocol: HTTP
                algorithm: ROUND_ROBIN
                type: PUBLIC
                timeout: 30
                region: IAD
                wait: yes
                state: present
                meta:
                  app: my-cool-app
              register: clb

            - name: Network create request
              rax_network:
                credentials: ~/.raxpub
                label: my-net
                cidr: 192.168.3.0/24
                state: present
                region: IAD
              register: network

            - name: Server create request
              rax:
                credentials: ~/.raxpub
                name: web%04d.example.org
                flavor: performance1-1
                image: ubuntu-1204-lts-precise-pangolin
                disk_config: manual
                networks:
                  - public
                  - private
                  - my-net
                region: IAD
                state: present
                count: 5
                exact_count: yes
                group: web
                wait: yes
              register: rax

            - name: Add servers to web host group
              add_host:
                hostname: "{{ item.name }}"
                ansible_host: "{{ item.rax_accessipv4 }}"
                ansible_ssh_pass: "{{ item.rax_adminpass }}"
                ansible_user: root
                groups: web
              loop: "{{ rax.success }}"
              when: rax.action == 'create'

            - name: Add servers to Load balancer
              rax_clb_nodes:
                credentials: ~/.raxpub
                load_balancer_id: "{{ clb.balancer.id }}"
                address: "{{ item.rax_networks.private|first }}"
                port: 80
                condition: enabled
                wait: yes
                region: IAD
              loop: "{{ rax.success }}"
              when: rax.action == 'create'

        - name: Configure servers
          hosts: web
          handlers:
            - name: restart nginx
              service: name=nginx state=restarted

          tasks:
            - name: Install nginx
              apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
              notify:
                - restart nginx

            - name: Ensure nginx starts on boot
              service: name=nginx state=started enabled=yes

            - name: Create custom index.html
              copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
                    owner=root group=root mode=0644

    RackConnect and Managed Cloud

    When using RackConnect version 2 or Rackspace Managed Cloud there are Rackspace automation tasks that are executed on the servers you create after they are successfully built. If your automation executes before the RackConnect or Managed Cloud automation, you can cause failures and unusable servers.

    These examples show creating servers, and ensuring that the Rackspace automation has completed before Ansible continues onwards.

    For simplicity, these examples are joined; however, both are only needed when using RackConnect. When only using Managed Cloud, the RackConnect portion can be ignored.

    The RackConnect portions only apply to RackConnect version 2.

    Using a Control Machine

        - name: Create an exact count of servers
          hosts: localhost
          gather_facts: False
          tasks:
            - name: Server build requests
              rax:
                credentials: ~/.raxpub
                name: web%03d.example.org
                flavor: performance1-1
                image: ubuntu-1204-lts-precise-pangolin
                disk_config: manual
                region: DFW
                state: present
                count: 1
                exact_count: yes
                group: web
                wait: yes
              register: rax

            - name: Add servers to in memory groups
              add_host:
                hostname: "{{ item.name }}"
                ansible_host: "{{ item.rax_accessipv4 }}"
                ansible_ssh_pass: "{{ item.rax_adminpass }}"
                ansible_user: root
                rax_id: "{{ item.rax_id }}"
                groups: web,new_web
              loop: "{{ rax.success }}"
              when: rax.action == 'create'

        - name: Wait for rackconnect and managed cloud automation to complete
          hosts: new_web
          gather_facts: false
          tasks:
            - name: ensure we run all tasks from localhost
              delegate_to: localhost
              block:
                - name: Wait for rackconnect automation to complete
                  rax_facts:
                    credentials: ~/.raxpub
                    id: "{{ rax_id }}"
                    region: DFW
                  register: rax_facts
                  until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
                  retries: 30
                  delay: 10

                - name: Wait for managed cloud automation to complete
                  rax_facts:
                    credentials: ~/.raxpub
                    id: "{{ rax_id }}"
                    region: DFW
                  register: rax_facts
                  until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
                  retries: 30
                  delay: 10

        - name: Update new_web hosts with IP that RackConnect assigns
          hosts: new_web
          gather_facts: false
          tasks:
            - name: Get facts about servers
              rax_facts:
                name: "{{ inventory_hostname }}"
                region: DFW
              delegate_to: localhost
            - name: Map some facts
              set_fact:
                ansible_host: "{{ rax_accessipv4 }}"

        - name: Base Configure Servers
          hosts: web
          roles:
            - role: users

            - role: openssh
              opensshd_PermitRootLogin: "no"

            - role: ntp

    Using Ansible Pull
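    When running under ansible-pull, the same wait logic can query the Rackspace API directly with rax_facts instead of XenStore. The following is a minimal sketch, not a canonical example: it assumes pyrax and a credentials file are present on the node, that the server was built in DFW, and that the inventory hostname matches the server name.

        ---
        - name: Ensure Rackconnect and Managed Cloud Automation is complete
          hosts: all
          connection: local
          tasks:
            - name: Check for completed bootstrap
              stat:
                path: /etc/bootstrap_complete
              register: bootstrap

            - name: Wait for rackconnect automation to complete
              rax_facts:
                credentials: ~/.raxpub
                name: "{{ inventory_hostname }}"   # assumes hostname matches the server name
                region: DFW
              register: rax_facts
              when: bootstrap.stat.exists != True
              until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
              retries: 30
              delay: 10

            - name: Wait for managed cloud automation to complete
              rax_facts:
                credentials: ~/.raxpub
                name: "{{ inventory_hostname }}"
                region: DFW
              register: rax_facts
              when: bootstrap.stat.exists != True
              until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
              retries: 30
              delay: 10

            - name: Set bootstrap completed
              file:
                path: /etc/bootstrap_complete
                state: touch
                owner: root
                group: root
                mode: 0400
              when: bootstrap.stat.exists != True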

    Using Ansible Pull with XenStore

        ---
        - name: Ensure Rackconnect and Managed Cloud Automation is complete
          hosts: all
          tasks:
            - name: Check for completed bootstrap
              stat:
                path: /etc/bootstrap_complete
              register: bootstrap

            - name: Wait for rackconnect_automation_status xenstore key to exist
              command: xenstore-exists vm-data/user-metadata/rackconnect_automation_status
              register: rcas_exists
              when: bootstrap.stat.exists != True
              failed_when: rcas_exists.rc|int > 1
              until: rcas_exists.rc|int == 0
              retries: 30
              delay: 10

            - name: Wait for rackconnect automation to complete
              command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
              register: rcas
              when: bootstrap.stat.exists != True
              until: rcas.stdout|replace('"', '') == 'DEPLOYED'
              retries: 30
              delay: 10

            - name: Wait for rax_service_level_automation xenstore key to exist
              command: xenstore-exists vm-data/user-metadata/rax_service_level_automation
              register: rsla_exists
              when: bootstrap.stat.exists != True
              failed_when: rsla_exists.rc|int > 1
              until: rsla_exists.rc|int == 0
              retries: 30
              delay: 10

            - name: Wait for managed cloud automation to complete
              command: xenstore-read vm-data/user-metadata/rax_service_level_automation
              register: rsla
              when: bootstrap.stat.exists != True
              until: rsla.stdout|replace('"', '') == 'Complete'
              retries: 30
              delay: 10

            - name: Set bootstrap completed
              file:
                path: /etc/bootstrap_complete
                state: touch
                owner: root
                group: root
                mode: 0400

        - name: Base Configure Servers
          hosts: all
          roles:
            - role: users

            - role: openssh
              opensshd_PermitRootLogin: "no"

            - role: ntp

    Advanced Usage

    Ansible Tower also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See the Tower documentation for more details.
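    For example, a provisioning callback from a booting node is a single HTTP request (the Tower host, job template ID, and host_config_key below are placeholders):

        $ curl --data "host_config_key=<host_config_key>" https://tower.example.com/api/v2/job_templates/42/callback/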

    A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared with remote hosts.

    Orchestration in the Rackspace Cloud

    Ansible is a powerful orchestration tool, and the rax modules make it possible to automate workflows such as:

    • Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
    • Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
    • Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively