Raid Fumigator Directions (Hunker)
- Raid Fumigator Directions
- Preparation
- Step 1
- Step 2
- Step 3
- Step 4
- Fumigation
- Step 1
- Step 2
- Step 3
- Step 4
- After Fumigation
- Step 1
- Step 2
- Step 3
- Set Up a RAID-Ready System
- Tips to rebuild a RAID 5 array without data loss
- RAID 5 is rebuilt, but data is lost?
- Ubuntu Wiki
- Software Raid
- FakeRaid
- Mixing Software Raid
- Hardware Raid
- Configure Software RAID on Linux
- Attaching data disks
- Install the mdadm utility
- Create the disk partitions
- Create the RAID array
- Add the new file system to /etc/fstab
- TRIM/UNMAP support
- StrategicCoffee
- How to use a RAID log
The Raid Fumigator penetrates cracks, crevices and beneath and behind appliances to kill 18 kinds of insects including roaches, spiders, flies and fleas. The Fumigator uses a chemical reaction to create a dry fog that spreads and penetrates more efficiently than aerosol-type foggers, without leaving surfaces oily or sticky with residue. One can of fumigator treats a room up to 16 feet by 20 feet.
Open cabinets, closets, drawers and cupboards in the room being treated.
Remove or cover all exposed food, food preparation utensils, dishes and surfaces where food is prepared.
Take pets and house plants out of the room. Remove or cover any fish tanks. Turn off air flow pumps in aquariums.
Shut outside doors and windows. Turn off air conditioners and fans. Temporarily disable your smoke alarms.
Grasp the foil tab on the plastic cup containing the fumigator can and peel off the foil. Take the metal can out of the cup.
Fill the cup with water up to the line inside it. Put the cup on the floor in the center of the room.
Place the metal fumigator can in the cup. Fumigation will begin in one to two minutes.
Leave the room and keep all doors and windows closed for three hours.
Open all doors and windows.
Allow the room to ventilate for 30 minutes before reoccupying.
Reactivate all smoke alarms. Turn on aquarium air pumps.
Set Up a RAID-Ready System
Content Type Install & Setup
Article ID 000005803
Last Reviewed 02/03/2020
To set up a RAID-ready system, make sure the system includes a single Serial ATA hard drive and follow the procedures below:
Enable RAID in system BIOS
Follow these steps to enable RAID in the system BIOS:
Depending on your Intel Desktop Board model, enable RAID using whichever of the two procedures below applies.
- Press F2 after the Power-On-Self-Test (POST) memory test begins.
- Select the Configuration menu, then the SATA Drives menu.
- Set the Chipset SATA Mode to RAID.
- Press F10 to save the BIOS settings and exit the BIOS Setup program.
- Press F2 after the Power-On-Self-Test (POST) memory test begins.
- Select the Advanced menu, then the Drive Configuration menu.
- Set the Drive Mode option to Enhanced.
- Enable Intel® RAID Technology.
- Press F10 to save the BIOS settings and exit the BIOS Setup program.
Load RAID driver using F6 installation method
Follow these steps to install the Intel® Rapid Storage Technology driver during operating system setup:
Note: You do not need to use the F6 installation method on Windows Vista* and Windows 7*.
Press F6 when prompted by the message:
Press F6 if you need to install a third party SCSI or RAID driver
This message displays during the text-mode phase at the beginning of Windows XP* setup.
Note: Nothing happens right after pressing F6. Setup is still loading drivers. Watch for the prompt to load support for mass storage devices.
Press S to Specify Additional Device.
Insert the support disk when prompted by the message:
Please insert the disk labeled Manufacturer-supplied hardware support disk into Drive A:
The disk includes the following files: IAAHCI.INF, IAAHCI.CAT, IASTOR.INF, IASTOR.CAT, IASTOR.SYS, and TXTSETUP.OEM
Use the up and down arrow keys to select your controller from the list of available SCSI adapters, which might include:
- Intel® 82801ER SATA RAID Controller
- Intel® 82801FR SATA RAID Controller
- Intel® 82801GR/GH SATA RAID Controller
- Intel® 82801GHM SATA RAID Controller
- Intel® ESB2 SATA RAID Controller
- Intel® 82801R SATA RAID Controller
Press Enter to confirm and continue.
The drivers are now installed. Leave the disk in the drive as Windows setup copies the files from the disk to the Windows installation folders. When the copy process is complete, remove the disk. Windows setup is ready to reboot.
Install Intel Rapid Storage Technology user interface
Follow these steps to install the Intel Rapid Storage Technology user interface.
How to rebuild RAID 5 without losing your data
In this article you will find out:
- when you need to rebuild a RAID 5 array
- how to rebuild a RAID 5 array without losing any data
- which software is best for RAID 5 data recovery
Are you ready? Let’s read!
When you need to rebuild a RAID 5 array and why you need to back up
Rebuilding a RAID 5 array is necessary when one disk of the array stops working, even if it does so only temporarily. Because only one drive has failed, the array keeps operating and you may not notice any data loss.
If RAID 5 continues to run in this degraded state, the remaining disks "replace" the defective drive and perform its work, which means they deteriorate faster; it is only a matter of time before a second drive fails. It should also be mentioned that in an array built from similar disks, once one disk has failed, the others are likely to follow soon.
If RAID 5 is not regularly monitored and tested, and the defective drive not replaced, there is a very high probability of another disk crash.
In that case, RAID 5 data recovery becomes extremely difficult, if not impossible. To avoid losing the array entirely, replace the failed disk as soon as possible.
If you have a backup of your data, that’s great. Even if two RAID disks go down, you have nothing to worry about.
If you do not have a backup, be sure to use RAID Recovery software to recover data from the damaged RAID disk first, because rebuilding a RAID 5 array overwrites the data; i.e., it will disappear forever.
Tips to rebuild a RAID 5 array without data loss
So, how do you not lose any data in the rebuilding process?
Here is what is recommended to do before you rebuild a RAID 5 array:
- The main tip is: do not create a new RAID array on the old drives! Doing so overwrites the previous structure and destroys all your existing data.
- Image is important! Before you rebuild a RAID 5 array, create an image of the RAID structure, as well as a backup on a separate volume. These steps secure your data immediately before restructuring.
- Save the backup twice. To be confident in data integrity, test your backup with multiple restorations, ideally in different physical locations.
What is not recommended to do before you rebuild a RAID 5 without losing data:
- Until the data is restored, be careful with every action: do not create, copy, move, add, delete or save any files on the disk, and do not open large programs or applications. Any of these can overwrite data on the damaged disk.
- Never run CHKDSK or FSCK before creating a backup. Otherwise, the file-system repair may delete or further corrupt the data on the disk.
RAID 5 is rebuilt, but data is lost?
If you have used the tips described above, you should not have any lost data. But what if your data is gone already; is it too late?
Not always. You can recover as much data as possible using RAID Recovery software. It has a free trial version with all the features of the licensed one, and only after you have made sure that your data is recoverable do you need to pay for a license to save it.
How do you perform RAID 5 data recovery?
Follow these instructions:
- 1. Download and install DiskInternals RAID Recovery. Connect the damaged disk array to the computer as independent local drives.
- 2. Open the RAID Wizard and select RAID members.
- 3. Use Reader or Uneraser mode to open the logical disk (it is contained in the hard drive section of the RAID disk).
- 4. After scanning, which may take a while, you will be able to preview recovered files.
- 5. To save recovered data, follow the wizard. Save your recovered files somewhere else, not the original place.
In case you are recovering lost files using a RAID disk image:
- 1. Connect the defective disk array as independent local drives to your computer.
- 2. Run DiskInternals RAID Recovery™. Perform these actions: click "Disks" -> "Mount image".
RAID is an acronym for Redundant Array of Independent Disks. It uses multiple hard disks storing the same data to protect against some degree of physical disk failure. The amount of protection it affords depends upon the type of raid used.
The supported, and probably optimal, way to use raid with Ubuntu is to employ Linux’s Multiple Device (md) raid system, optionally with the Logical Volume Manager (LVM).
In Breezy Badger (5.10), installation of md and LVM can be completed entirely with the installation CD without using expert mode.
This is the simplest method of setting up software raid. It uses one raid partition per operating system partition, unlike LVM, which uses logical volumes so that a single raid partition suffices. For each disk in the array, create a "Use as: Raid physical" partition of appropriate size for each operating system partition (/boot, /, swap, /home, etc.). The /boot or / partitions on each disk should be marked bootable. Use configure raid to make raid devices for each partition. On each of these raid devices, configure a single operating system partition (/boot, /, swap, etc.) of the size of the entire device. Continue installing Ubuntu.
Note for RAID1 first timers: Each RAID array is usually made up of at least two ‘active’ devices (one of which is actually ‘active’ and one of which is the mirror) — spare devices are there for when one of the active devices fail so they can jump in and continue mirroring. The same principle applies if you have one true ‘active’ and multiple mirrors etc.
md made super simple
If you want to make a RAID array of data devices and not of boot devices, adapt the following recipe to your needs.
Goal: Create a RAID 1 array for user data, that is, for home directories.
Setup: Ubuntu 6.06 LTS (Dapper server) with one (1) 40GB root partition (/dev/hda), currently holding all data including home and two (2) unused 250GB hard drives (/dev/hdc, /dev/hdd).
The "super simple md" recipe
All commands, e.g., mke2fs, have sensible defaults and "do the right thing".
Note: if you have just one of the data disks at hand but intend to buy one more later, you can build the array using:
# mdadm /dev/md0 --create -l 1 -n 2 /dev/hdc missing
where "missing" reserves the place of the new disk.
- Didn't test with the --auto option (as in the original super simple recipe above).
# mdadm /dev/md0 --add /dev/hdd will add the new disk later (be aware of disk sizes when buying).
- It's worth saying that some people (including me) got an "mdadm: /dev/hdd1 not large enough to join array" error when using the standard procedure, that is, partitioning the disk and adding the partition to the array.
- You may need to edit /etc/fstab to remove redundant entries, e.g., if either /dev/hdc or /dev/hdd was used for something else before the pair of hard drives was installed.
- This recipe creates ext2 partitions on the drives being raided; if you want ext3, you will need to at least use the "-j" option to mke2fs, and you will need to modify the /etc/fstab entry appropriately. Some people reported errors with journaled file systems (at least ext3 and reiserfs) under RAID-intensive use.
- Or you can use mkfs.ext3 instead of mke2fs to make the file system ext3, and as above change the fstab entry from ext2 to ext3
- Various man pages reference man pages that either no longer exist or are not installed by default. E.g., the md man page refers to mkraid(8).
- You can use gparted (sudo gparted) to find the hard drive devices (mine were sdb and sdc) as well as format the disks in a graphical interface if you wish. (You may need to sudo apt-get install gparted if it is not installed)
- I use a desktop install and had to sudo apt-get install mdadm before this would work.
- ^D is control-d (ctrl-D).
- If you have a SATA disk (most are, as of September 2010) you'll see "sdX" instead of "hdX" devices (e.g., /dev/sda instead of /dev/hda).
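Putting the recipe and the notes above together, the whole procedure can be sketched as follows. This is a sketch under the assumptions of the setup above (whole-disk devices /dev/hdc and /dev/hdd; modern kernels will expose them as /dev/sdX instead):

```shell
# Create a two-device RAID 1 with only one disk present; "missing"
# reserves the slot for the disk to be bought later:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc missing

# Make the file system; mkfs.ext3 gives ext3 directly (equivalent to mke2fs -j):
sudo mkfs.ext3 /dev/md0

# When the second disk arrives, add it to complete the mirror:
sudo mdadm /dev/md0 --add /dev/hdd

# Watch the resync progress:
cat /proc/mdstat
```

If you partition the disks first instead of using whole disks, add the partitions (e.g. /dev/hdd1), and keep in mind the "not large enough to join array" error mentioned in the notes.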
Installation/LVMOnRaid: setup using both LVM and md. (The LVM setup didn't work for me.)
Installation/RAID1: an older description for Warty Warthog.
Most, if not all, of the so-called "raid controllers" installed on motherboards are actually just hard drive controllers with a few extra features to make it easy to implement software raid drivers. These are highly non-standard: each chipset uses different on-disk formats and different drivers. These systems are not desirable for use with Ubuntu; the pure software raid described above is better. They are primarily of interest when compatibility with another existing system that employs them is required.
Device mapper raid can be used to access many of these volumes. It is provided by the dmraid package. dmraid is in the Universe repository.
After installing dmraid you can run the command dmraid -r to list the devices and raid volumes on your system. dmraid makes a device file for each volume and partition; these can be found in the /dev/mapper/ directory, and can be mounted and otherwise manipulated like normal block devices. Other options of the dmraid program are used to administer the array.
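As a sketch, a typical inspection session might look like this (the volume name under /dev/mapper/ is hypothetical; the actual name depends on your chipset and array):

```shell
sudo apt-get install dmraid       # dmraid lives in the Universe repository
sudo dmraid -r                    # list the raid devices and volumes on the system
ls /dev/mapper/                   # one block device per volume and per partition
# Mount a discovered partition read-only to inspect it (device name is hypothetical):
sudo mount -o ro /dev/mapper/isw_abcdefgh_Volume01 /mnt
```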
It is not advisable to install Ubuntu onto disks managed by a fake raid system; it is extremely difficult and the results will be disappointing compared to Linux’s LVM and md software raid system. If you really must do it to install Ubuntu on the same raid array as an existing installation of another operating system see the following:
Mixing Software Raid
Ubuntu can be installed on its own raid array on a computer that is using FakeRaid for another operating system on another array. There are a few steps that need to be followed for this to work:
Identify which drives hold the existing operating system. You can do this by booting the Ubuntu Live CD, enabling the universe repository, installing dmraid, mounting the partitions, and poking around. You can see which device was mapped to which block device by running dmraid -r .
- root (hd2,0)
- install (hd2)
- root (hd3,0)
You can restore the boot record of a stepped on drive of another raid array in your system if you have a backup of the drive’s master boot record. For some bootloaders and configurations, such as NT loader on RAID1, the master boot record of the other drive in the array can be used as the backup.
Real hardware raid systems are very rare and are almost always provided by an add-in card, such as a PCI card. Your hardware will need kernel-level support in order to work with Ubuntu. You can find out whether it is supported without much work by booting a Live CD: your array should be visible as a SCSI block device and, if it has existing partitions and file systems, mountable.
Configure Software RAID on Linux
It's a common scenario to use software RAID on Linux virtual machines in Azure to present multiple attached data disks as a single RAID device. Typically this is done to improve performance and throughput compared to using just a single disk.
Attaching data disks
Two or more empty data disks are needed to configure a RAID device. The primary reason for creating a RAID device is to improve the performance of your disk IO. Based on your IO needs, you can choose to attach disks backed by Azure Standard storage, with up to 500 IOPS per disk, or Premium storage, with up to 5000 IOPS per disk. This article does not go into detail on how to provision and attach data disks to a Linux virtual machine. See the Microsoft Azure article attach a disk for detailed instructions on how to attach an empty data disk to a Linux virtual machine on Azure.
Do not mix disks of different sizes; doing so limits the performance of the RAID set to that of the slowest disk.
Install the mdadm utility
CentOS & Oracle Linux
SLES and openSUSE
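The install commands themselves are omitted above; on the distributions listed they would look roughly like this (the package is named mdadm in each case):

```shell
# Ubuntu
sudo apt-get update
sudo apt-get install mdadm

# CentOS and Oracle Linux
sudo yum install mdadm

# SLES and openSUSE
sudo zypper install mdadm
```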
Create the disk partitions
In this example, we create a single disk partition on /dev/sdc. The new disk partition will be called /dev/sdc1.
Start fdisk to begin creating partitions
Press ‘n’ at the prompt to create a new partition:
Next, press ‘p’ to create a primary partition:
Press ‘1’ to select partition number 1:
Select the starting point of the new partition, or press Enter to accept the default and place the partition at the beginning of the free space on the drive:
Select the size of the partition; for example, type '+10G' to create a 10-gigabyte partition, or press Enter to create a single partition that spans the entire drive:
Next, change the ID and type of the partition from the default ID ’83’ (Linux) to ID ‘fd’ (Linux raid auto):
Finally, write the partition table to the drive and exit fdisk:
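The interactive session above can also be driven non-interactively by feeding fdisk the same answers through a here-document. This is a sketch; /dev/sdc comes from the example above, and you should double-check the device name before writing anything:

```shell
sudo fdisk /dev/sdc <<'EOF'
n
p
1


t
fd
w
EOF
```

The two blank lines accept the default first and last sectors (a single partition spanning the drive); 'fd' sets the partition type to Linux raid autodetect, and 'w' writes the table and exits.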
Create the RAID array
The following example will "stripe" (RAID level 0) three partitions located on three separate data disks (sdc1, sdd1, sde1). After running this command, a new RAID device called /dev/md127 is created. Note that if these data disks were previously part of another, defunct RAID array, it may be necessary to add the --force parameter to the mdadm command:
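The mdadm invocation described above (omitted from the text) would be roughly as follows, a sketch using the partition names from the example:

```shell
# Stripe (RAID 0) three partitions into a single array device:
sudo mdadm --create /dev/md127 --level=0 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1

# If the disks belonged to a defunct array, mdadm may refuse; then add --force:
# sudo mdadm --create /dev/md127 --force --level=0 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
```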
Create the file system on the new RAID device
CentOS, Oracle Linux, SLES 12, openSUSE, and Ubuntu
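On these distributions the file system is created directly on the new array device; ext4 is an assumption here, since the original steps do not name the file-system type:

```shell
sudo mkfs -t ext4 /dev/md127
```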
SLES 11 — enable boot.md and create mdadm.conf
A reboot may be required after making these changes on SUSE systems. This step is not required on SLES 12.
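A sketch of the SLES 11 steps, assuming a stock SLES 11 layout where the boot.md init script assembles md arrays and the configuration lives in /etc/mdadm.conf (enabling the service via chkconfig is an assumption):

```shell
# Record the array so it is assembled at boot:
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm.conf'

# Enable the boot.md init script that assembles md arrays early in boot:
sudo chkconfig boot.md on
```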
Add the new file system to /etc/fstab
Improperly editing the /etc/fstab file could result in an unbootable system. If unsure, refer to the distribution’s documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file is created before editing.
Create the desired mount point for your new file system, for example:
When editing /etc/fstab, the UUID should be used to reference the file system rather than the device name. Use the blkid utility to determine the UUID for the new file system:
Open /etc/fstab in a text editor and add an entry for the new file system, for example:
Or on SLES 11:
Then, save and close /etc/fstab.
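Concretely, the steps above might look like this. The mount point /data and the UUID are assumptions for illustration; the real UUID comes from blkid:

```shell
# sudo mkdir /data                  # create the mount point (name is an assumption)
# sudo blkid /dev/md127             # prints the UUID of the new file system

UUID="aabbccdd-1122-3344-5566-77889900aabb"   # hypothetical value from blkid
FSTAB_LINE="UUID=${UUID} /data ext4 defaults,nofail 0 2"
echo "${FSTAB_LINE}"

# Append it to /etc/fstab (after backing the file up), then verify with mount -a:
# echo "${FSTAB_LINE}" | sudo tee -a /etc/fstab
# sudo mount -a
```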
Test that the /etc/fstab entry is correct:
If this command results in an error message, please check the syntax in the /etc/fstab file.
Next run the mount command to ensure the file system is mounted:
(Optional) Failsafe Boot Parameters
Many distributions include either the nobootwait or nofail mount parameters that may be added to the /etc/fstab file. These parameters allow for failures when mounting a particular file system and allow the Linux system to continue to boot even if it is unable to properly mount the RAID file system. Refer to your distribution’s documentation for more information on these parameters.
Linux boot parameters
In addition to the above parameters, the kernel parameter bootdegraded=true can allow the system to boot even if the RAID is perceived as damaged or degraded, for example if a data drive is inadvertently removed from the virtual machine. By default such a failure could result in a non-bootable system.
Please refer to your distribution's documentation on how to properly edit kernel parameters. For example, in many distributions (CentOS, Oracle Linux, SLES 11) these parameters may be added manually to the /boot/grub/menu.lst file. On Ubuntu this parameter can be added to the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub.
Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. These operations are primarily useful in standard storage to inform Azure that deleted pages are no longer valid and can be discarded. Discarding pages can save cost if you create large files and then delete them.
RAID may not issue discard commands if the chunk size for the array is set to less than the default (512KB). This is because the unmap granularity on the Host is also 512KB. If you modified the array’s chunk size via mdadm’s —chunk= parameter, then TRIM/unmap requests may be ignored by the kernel.
There are two ways to enable TRIM support in your Linux VM. As usual, consult your distribution for the recommended approach:
Use the discard mount option in /etc/fstab, for example:
In some cases the discard option may have performance implications. Alternatively, you can run the fstrim command manually from the command line, or add it to your crontab to run regularly:
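For example, the two approaches look like this; the UUID and the /data mount point are hypothetical, chosen only to illustrate the line format:

```shell
# Option 1: discard at mount time, via the fstab entry:
FSTAB_LINE="UUID=aabbccdd-1122-3344-5566-77889900aabb /data ext4 defaults,nofail,discard 0 2"
echo "${FSTAB_LINE}"

# Option 2: trim on a schedule instead; run manually:
# sudo fstrim /data
# or weekly from root's crontab (Sunday 03:00):
# 0 3 * * 0 /sbin/fstrim /data
```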
Everything you need to know about Business Strategy Development and Execution
How to use a RAID log
Most of the projects I work on use some form of RAID log.
RAID stands for Risks, Actions, Issues and Decisions. The RAID log is a simple tool to keep track of all of these, which can be very useful in regular project meetings as well as for audit purposes.
- Risks represent those things that could go wrong, either in the execution of the project (project risks) or in the underlying business being changed or created (business risks). For each risk we normally estimate the probability and impact of it happening, and also any actions we are taking to mitigate against it. We also normally assign an owner, who is responsible for monitoring and mitigating against the risks, and set a future review date on which it will next be assessed.
- Actions represent all the things that need to be done. These are typically actions that arise during project meetings and don’t necessarily include all of the actions already on the project plan. Each action should have an owner, a due date, and eventually a date on which the action was completed. Each project meeting should review actions, to mark off those completed and review progress against those not yet completed.
- Issues are known problems within the project or the business being changed or created. Many people think of issues as risks which have already happened. Issues are typically escalated up the management chain, for example to a project steering committee or business executive team. For each issue, you should identify what you intend to do about it, who should do it, and when it should next be reviewed. Issues should be marked as resolved once the problem is sorted out or the project moves past or around the obstacle.
- Decisions are simply a list of the decisions made in a project, kept as a record. You can also note when each decision was made, and by whom.
The secret to a good RAID log is to record the right risks, issues and decisions at the right level of detail. Too many, in too much detail, simply creates unnecessary bureaucracy. Too few, in too little detail, does not provide a sufficient record of the state and progress of the business. For example, I have often seen risk logs populated with boilerplate risks: generic project risks, such as 'we may not get sufficient executive sponsorship', which don't really add any insight to the project. (If you genuinely think that is a risk, you would be better off identifying the underlying reasons that might cause it, and then expressing the risk in those terms.)
RAID logs are an excellent governance mechanism, and worth keeping even if for no other reason than that you have all of the information to hand if the internal audit department or some other stakeholder decides to audit your project. But don't forget that projects are fundamentally about doing things. Identifying risks and issues without also identifying actions to mitigate them, and making decisions to resolve them, will not get you very far.