Preparing to Install Memory Machine

Prepare to install Memory Machine on a standalone data host or on a combined data/management host by verifying that the host:
  • Meets recommended hardware and OS requirements
  • Has persistent memory configured correctly
  • Has the necessary software libraries and utilities installed
  • Has kernel parameters set to support how you plan to use the host

About this task

Install Memory Machine on a standalone data host or on a combined data/management host by following the steps in this procedure and in Installing Memory Machine.
Note: You must install the following:
  • Memory Machine Data Service on all data hosts
  • Memory Machine Management Service on one combined data/management host

Procedure

  1. Verify that all data hosts meet the following requirements. MemVerge recommends that all data hosts have at least these resources:
    • Processor: One or more second generation Intel Xeon Scalable Processors (formerly Cascade Lake), with Intel C620 Series Chipset or later
    • DRAM: 4 × 16GB RDIMM, 2666 MHz
    • Persistent Memory (PMEM): Intel Optane Persistent Memory 100 Series, 4 × 128GB DIMMs
    You must run the Data Service on a supported Linux operating system:
    • Red Hat Enterprise Linux (RHEL) release 7 or 8
    • CentOS release 7 or 8
    • Ubuntu release 20.04
    For more information about Data Service hardware requirements, see Hardware Recommendations in the Memory Machine User Guide.
    For more information about supported OS versions, see Operating System Requirements in the Memory Machine User Guide.
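You can confirm the running distribution and release against the supported list with a quick check. This is a minimal sketch: it assumes only the standard /etc/os-release file, which is present on RHEL, CentOS, and Ubuntu alike.

```shell
# /etc/os-release defines NAME and VERSION_ID for the running distribution;
# compare the reported values against the supported OS list.
. /etc/os-release
echo "Detected: $NAME $VERSION_ID"
```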
  2. If you are installing Management Service on this host and you plan to use a database other than the default local-host database, ensure that this host meets the minimum requirements for the database you plan to use (see Understanding the Memory Machine Database in the Memory Machine User Guide).
  3. Ensure that the PMem utilities ipmctl and ndctl are installed on the host:
    # whereis ipmctl
    ipmctl: /usr/bin/ipmctl /usr/share/ipmctl /usr/share/man/man1/ipmctl.1.gz
    # whereis ndctl
    ndctl: /usr/bin/ndctl /etc/ndctl /usr/share/man/man1/ndctl.1.gz
    #
  4. If the ipmctl or ndctl utilities are absent, install the packages as follows, depending on the Linux distribution running on the host:
    • For CentOS or RHEL 7 or 8, use the yum package manager:

      # yum install ipmctl ndctl
      Updating Subscription Management repositories.
      ...
      Dependencies resolved.
      ===================================================================
       Package     Architecture   Version              Repository  Size
      ===================================================================
      Installing:
       ipmctl      x86_64          02.00.00.3885-1.el8 epel         91 k
       ndctl       x86_64          71.1-2.el8          baseos      189 k
      Installing dependencies:
       daxctl-libs x86_64          71.1-2.el8          baseos        41 k
       libipmctl   x86_64          02.00.00.3885-1.el8 epel         498 k
       ndctl-libs  x86_64          71.1-2.el8          baseos        78 k
      
      Transaction Summary
      ===================================================================
      Install  5 Packages
      
      Total download size: 898 k
      Installed size: 3.1 M

      The package manager prompts you to continue. Answer y. The listed packages (including dependencies, if required) are downloaded, installed, and verified. (The following example is abbreviated. If the installation is successful, you will see the final Installed and Complete! messages.)

      Is this ok [y/N]: y
      
      Downloading Packages:
      
      ...
      
      Installed products updated.
      
      Installed:
        daxctl-libs-71.1-2.el8.x86_64       ipmctl-02.00.00.3885-1.el8.x86_64       libipmctl-02.00.00.3885-1.el8.x86_64       ndctl-71.1-2.el8.x86_64
        ndctl-libs-71.1-2.el8.x86_64
      
      Complete!
      #
    • For Ubuntu release 20.04, use the apt package manager (example output is abbreviated):

      # apt install ipmctl ndctl
      
      Reading package lists... 0%
      Reading package lists... 100%
      Reading package lists... Done
      
      Building dependency tree... 0%
      ...
      Reading state information... Done
      
      The following NEW packages will be installed:
      
        ipmctl ndctl
      0 upgraded, 2 newly installed, 0 to remove and 43 not upgraded.
      Need to get 0 B/226 kB of archives.
      After this operation, 460 kB of additional disk space will be used.
      
      Selecting previously unselected package ndctl.
      (Reading database ... 
      (Reading database ... 5%
      ...
      (Reading database ... 100%
      (Reading database ... 71629 files and directories currently installed.)
      Preparing to unpack .../archives/ndctl_67-1_amd64.deb ...
      Progress: [  0%] [....................................................]
      ...
      Progress: [ 89%] [##############################################......]
      Processing triggers for man-db (2.9.1-1) ...
      Processing triggers for systemd (245.4-4ubuntu3.11) ...
      
      # 
  5. Confirm that the PMem DIMMs are in a healthy state as follows:
    # ipmctl show -dimm
    
     DimmID | Capacity  | HealthState | ActionRequired | LockState | FWVersion
    ==============================================================================
     0x0001 | 126.4 GiB | Healthy     | 0              | Disabled  | 01.02.00.5310
     0x0002 | 126.4 GiB | Healthy     | 0              | Disabled  | 01.02.00.5310
    # 
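If you manage many hosts, this health check can be scripted. The following is a minimal sketch that parses the table format shown above; the sample variable stands in for live ipmctl show -dimm output, so on a real host you would pipe that command into the awk filter instead.

```shell
# Report any PMem DIMM whose HealthState column is not "Healthy".
# The sample table stands in for live `ipmctl show -dimm` output.
sample=' DimmID | Capacity  | HealthState | ActionRequired | LockState | FWVersion
==============================================================================
 0x0001 | 126.4 GiB | Healthy     | 0              | Disabled  | 01.02.00.5310
 0x0002 | 126.4 GiB | Healthy     | 0              | Disabled  | 01.02.00.5310'

# Data rows start with " 0x"; HealthState is the third "|"-separated field.
# Prints nothing when every DIMM is Healthy.
printf '%s\n' "$sample" |
  awk -F'|' '/^ 0x/ && $3 !~ /Healthy/ { print "DIMM" $1 "is" $3 }'
```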
  6. Determine what mode the PMem DIMMs are in as follows. If the output shows that PersistentMemoryType is AppDirect, skip to step 9.
    # ipmctl show -region

    SocketID | ISetID | PersistentMemoryType | Capacity | FreeCapacity | HealthState
    ================================================================================
    0x0000 | 0x03ea7f48902a2ccc | AppDirect | 756.000 GiB | 0.000 GiB | Healthy
    0x0001 | 0xbad27f48f5262ccc | AppDirect | 756.000 GiB | 0.000 GiB | Healthy
  7. If the PersistentMemoryType is not AppDirect, clear the configuration.
    Warning: Provisioning or changing modes may result in data loss. Existing data on the PMem DIMMs should be backed up to other storage before executing this command.
    1. Use the ndctl destroy-namespace command to clear the configuration:
      # ndctl destroy-namespace -f all
      #
    2. Reboot the host:
      # reboot
      #
    For a brief explanation of goals, regions, and namespaces, see PMem Terminology.
  8. Create a new goal configuration.
    1. Create a goal with the following command:
      # ipmctl create -goal PersistentMemoryType=AppDirect
      #
    2. Reboot the host:
      # reboot
      #
  9. Create namespaces as follows. Determine the region ID from the last four digits (after "0x") of the SocketID displayed in the output of the ipmctl show -region command issued in step 6 (reproduced here):
    ipmctl show -region
    SocketID | ISetID | PersistentMemoryType | Capacity | FreeCapacity | HealthState 
    ================================================================================ 
    0x0000 | 0x03ea7f48902a2ccc | AppDirect | 756.000 GiB | 0.000 GiB | Healthy 
    0x0001 | 0xbad27f48f5262ccc | AppDirect | 756.000 GiB | 0.000 GiB | Healthy
    Ignore leading zeros. Append the region ID to "region" to generate the parameter for the -r option in the ndctl create-namespace command, as shown here:
    # ndctl create-namespace -r region0 -m devdax
    # ndctl create-namespace -r region1 -m devdax
    #
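On hosts with more sockets, the region arguments can be derived programmatically from the SocketID column instead of typed by hand. A sketch follows; the sample variable stands in for live ipmctl show -region output, and the create-namespace call is left as a comment so the snippet is safe to dry-run.

```shell
# Derive region names from the SocketID column of `ipmctl show -region`.
# The sample stands in for the live command's data rows.
sample='0x0000 | 0x03ea7f48902a2ccc | AppDirect | 756.000 GiB | 0.000 GiB | Healthy
0x0001 | 0xbad27f48f5262ccc | AppDirect | 756.000 GiB | 0.000 GiB | Healthy'

printf '%s\n' "$sample" | while read -r sock _; do
  # Strip the 0x prefix; 16#... interprets the rest as hex, dropping leading zeros.
  region="region$(( 16#${sock#0x} ))"
  echo "$region"    # region0, then region1
  # On a live host: ndctl create-namespace -r "$region" -m devdax
done
```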
  10. Use ndctl list to verify that the devdax devices have been created, as follows:
    # ndctl list 
    [
      { 
        "dev":"namespace1.0", 
        "mode":"devdax", 
        "map":"dev", 
        "size":799063146496, 
        "uuid":"df52adaa-0591-4579-a5ec-4443b421d04e", 
        "chardev":"dax1.0", 
        "align":2097152
      }, 
      { 
        "dev":"namespace0.0", 
        "mode":"devdax", 
        "map":"dev", 
        "size":799063146496, 
        "uuid":"b6ed68a3-c8b5-4540-8545-271b4ef6667a", 
        "chardev":"dax0.0", 
        "align":2097152 
      } 
    ]
    #
  11. Change the kernel vm.max_map_count setting from its default to 6,300,000 by adding the following line to the /etc/sysctl.conf file:
    vm.max_map_count = 6300000
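Before editing the file, you can read the running value to confirm the change is needed; on most Linux distributions the default is 65530.

```shell
# Read the current value from procfs; it does not change until you run
# `sysctl -p` or reboot.
cat /proc/sys/vm/max_map_count
```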
  12. Calculate the number of HugePages that the host will need.
    If you plan to use most of the server's memory to run Memory Machine-managed applications, create enough HugePages to cover 60% of the server's DRAM. (The OS requires about 10% of the DRAM to run. To be safe, another 30% is set aside for other applications on this host.) HugePages are 2 MB each.
    For example: If the system has 64 GB of DRAM, the DRAM capacity needed for HugePages is:
    64 GB × 0.6 = 38.4 GB.
    So allocate
    38.4 GB ÷ 2 MB = 19200 HugePages.
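The arithmetic above can be done directly in shell. This sketch reads MemTotal from /proc/meminfo; note that MemTotal reports usable memory in kB, slightly less than installed DRAM, so the result will differ a little from the worked example.

```shell
# Reserve 60% of DRAM as 2 MB (2048 kB) HugePages.
mem_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
echo $(( mem_kb * 60 / 100 / 2048 ))
```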
  13. Configure the number of HugePages by adding the following line to the /etc/sysctl.conf file:
    vm.nr_hugepages=num_huge_pages
    where num_huge_pages is the number you calculated in the previous step.
    For more on setting HugePages in Memory Machine, see Configuring HugePages in the Memory Machine User Guide.
  14. To set vm.max_map_count and vm.nr_hugepages to their new values, run the following command (or equivalently, reboot the system):
    # sysctl -p
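To confirm the new values took effect, read them back from procfs. On a correctly configured host, vm.max_map_count reports 6300000 and HugePages_Total reports the count you calculated in step 12.

```shell
# Read back the applied settings.
cat /proc/sys/vm/max_map_count
grep HugePages_Total /proc/meminfo
```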

What to do next

Continue to Installing Memory Machine.