One further note: if, within 24 hours of the stroke, the patient can take a "Niuhuang Jiedu pill" (牛黃解毒丸) from Beijing Tongrentang (北京同仁堂), the after-effects of the stroke are said to be reduced. Note that this is the pill, not the "Niuhuang Jiedu tablet" (牛黃解毒片). The Beijing Tongrentang pill costs several hundred Hong Kong dollars apiece and is wrapped in gold leaf.
One more thing: I am myself in the high-risk group for stroke, so if you are a friend, remember to use the method in this article to save me. Don't stand on ceremony.
As for the "one drop of blood" mentioned in the article, a Chinese-medicine practitioner explains that one drop is roughly the size of a soybean.
The original article begins:
I have been through this myself and it won back my father's life... it was because I had once read this account that my father was saved!
When someone suffers a stroke, the capillaries in the brain rupture little by little. If this happens, do not panic. Wherever the patient happens to be (bathroom, bedroom or living room), do not move them, because moving them will hasten the rupture of the capillaries. First help the patient sit up steadily where they are so that they cannot fall again, and only then begin the bloodletting.
If there is a hypodermic needle in the house, so much the better; if not, take a sewing needle or a pin, sterilise it over a flame, and prick the tips of the patient's ten fingers (there is no fixed acupuncture point; prick roughly one fen, a tenth of a Chinese inch, from the fingernail) until blood comes out (if it does not, squeeze it out by hand). Once all ten fingertips have bled (one drop each), the patient should regain consciousness on their own within a few minutes.
If the mouth is also drooping, pull the patient's ears until they turn red, then prick each earlobe twice so that each sheds two drops of blood. A few minutes later the mouth should return to normal.
Wait until the patient has completely returned to normal, with nothing amiss, before sending them to hospital; then they can certainly be brought out of danger. Otherwise, if the patient is hurriedly loaded onto an ambulance, the jolting and shaking of the journey may rupture nearly all of the brain's capillaries before the hospital is even reached. If, against the odds, the patient keeps their life and, like Premier Sun, is left barely able to move, that is only thanks to the blessing of their ancestors.
This life-saving bloodletting method was passed on by Mr Xia Boting (夏伯挺), a Chinese-medicine practitioner living in Hsinchu, and I have tested it personally, so I dare say it is one hundred per cent effective.
It was around 1979 (the 68th year of the Republic), when I was teaching at Feng Chia College in Taichung. One morning, while I was in class, another teacher ran to my classroom and, out of breath, said: "Teacher Liu, come quickly, the director has had a stroke!" I ran straight up to the third floor and saw Director Chen Fu-tian (陳幅添): his colour was wrong, his speech was slurred and his mouth was drooping; it was clearly a stroke.
I immediately asked a student assistant to go to the pharmacy outside the school gate and buy a hypodermic needle, and pricked each of Director Chen's ten fingertips. Once every fingertip had shown blood (a bean-sized drop), his colour came back within a few minutes and his eyes regained their light; only his mouth was still crooked. So I pulled and rubbed his ears to draw blood into them, and once they were red I pricked each earlobe twice. When two drops of blood had flowed from each earlobe, the miracle happened: within three to five minutes the shape of his mouth returned to normal and his speech was clear again. We let him sit quietly for a while and drink a cup of hot tea, then helped him downstairs and drove him to Huihua Hospital. He was put on a drip, rested for a night, and was discharged the next day, returning to the school to teach. He carried on working as usual, with no after-effects whatsoever.
Compare this with the typical stroke patient, who is rushed to hospital for treatment: the jolting along the way makes the blood vessels rupture rapidly, so most patients never get up again. That is why stroke sits as high as second place on the list of causes of death; even the luckiest merely keep their lives and are left disabled for the rest of them. What a dreadful illness this is. If everyone can remember this life-saving bloodletting method and applies it at once, it can snatch someone back from death in a very short time and guarantee a completely normal recovery. I hope everyone will pass this first-aid method on to everyone else; then stroke could be struck from the list of leading causes of death.
● Pass this on to others after reading; the merit will be boundless ●
# psrinfo -pv
The physical processor has 1 virtual processor (0)
x86 (GenuineIntel family 6 model 23 step 10 clock 2833 MHz)
Intel(r) Core(tm)2 Quad CPU Q9550 @ 2.83GHz
# zfs-backup.pl
#!/usr/bin/perl
### Design By Andrew Choi
### Snapshot each source filesystem and send it to the backup pool:
### a full "zfs send" on the first run, incremental sends afterwards.
use strict;
use warnings;

### Backup configuration
my @backuppool = ("ntpool/u0","ntpool/local");       # source filesystems
my $targetpool = "backup";                           # destination pool
my $record = "/$targetpool/zfs-backup-record.dat";   # history of snapshot datecodes

### Main Program
# Generate a datecode (YYYYMMDDhhmmss) for this run's snapshots
my ($sec,$min,$hour,$mday,$mon,$year) = localtime time;
$year = $year + 1900;
$mon  = $mon + 1;                  # localtime months run 0-11
$mon  = substr "0".$mon, -2;
$mday = substr "0".$mday,-2;
$hour = substr "0".$hour,-2;
$min  = substr "0".$min, -2;
$sec  = substr "0".$sec, -2;
my $datecode = $year.$mon.$mday.$hour.$min.$sec;

# Read the datecode of the previous backup, if there is one
my $lastrecord = "";
if (open FILE, "<", $record) {
    my @temp = <FILE>;
    close FILE;
    if (@temp) {
        $lastrecord = pop(@temp);
        chomp $lastrecord;
    }
}

# Append this run's datecode to the record file
open FILE, ">>", $record or die "Cannot write $record: $!";
print FILE "$datecode\n";
close FILE;

# Create a snapshot of each source filesystem
foreach my $pool (@backuppool) {
    my $command = "zfs snapshot $pool\@$datecode\n";
    print $command;
    `$command`;
}

# Backup method
if ($lastrecord eq "") {
    # Full backup on the first run
    foreach my $pool (@backuppool) {
        my $command = "zfs send $pool\@$datecode | zfs recv -F $targetpool/$pool\n";
        print $command;
        `$command`;
    }
} else {
    # Incremental backup relative to the previous snapshot
    foreach my $pool (@backuppool) {
        my $command = "zfs send -i $pool\@$lastrecord $pool\@$datecode | zfs recv -F $targetpool/$pool\n";
        print $command;
        `$command`;
    }
}
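The script takes no arguments, so the simplest way to run it regularly is from root's crontab. A minimal sketch follows; the install path, log file and schedule are assumptions of mine, not part of the original script:
0 2 * * * /usr/local/bin/zfs-backup.pl >> /var/log/zfs-backup.log 2>&1
This would take and send a fresh set of snapshots every night at 02:00.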
# clean-snapshot.pl
#!/usr/bin/perl
### Design By Andrew Choi
### Clean snapshot: destroy every snapshot listed in the record file,
### on both the source filesystems and the backup pool, then remove the record file.
use strict;
use warnings;

my @backuppool = ("ntpool/u0","ntpool/local");
my $targetpool = "backup";
my $record = "/$targetpool/zfs-backup-record.dat";

### Main Program
# Read the list of snapshot datecodes from the record file
open FILE, "<", $record or do { print "No snapshots to clean\n"; exit; };
my @temp = <FILE>;
close FILE;
if (!@temp) { print "No snapshots to clean\n"; exit; }

# Destroy each recorded snapshot on the source and on the backup pool
foreach my $datecode (@temp) {
    chomp $datecode;
    foreach my $pool (@backuppool) {
        my $command = "zfs destroy $pool\@$datecode\n";
        `$command`;
        print $command;
        $command = "zfs destroy $targetpool/$pool\@$datecode\n";
        `$command`;
        print $command;
    }
}

# Remove the record file
my $command = "rm $record\n";
print $command;
`$command`;
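For completeness, restoring a filesystem from the backup pool is just a zfs send/recv in the opposite direction. A minimal sketch, assuming the dataset names used by the scripts above and a made-up datecode (pick one that actually exists on the backup pool):
zfs send backup/ntpool/u0@20090601020000 | zfs recv ntpool/u0_restore
This recreates the backed-up snapshot as a new dataset ntpool/u0_restore without touching the original ntpool/u0.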
Client Side
Add an iSCSI discovery address :-
# iscsiadm add discovery-address TargetServerIP:Port
e.g.
# iscsiadm add discovery-address 192.168.105.141:3260
List iSCSI discovery addresses and the targets they expose :-
# iscsiadm list discovery-address -v TargetServerIP:Port
e.g.
# iscsiadm list discovery-address -v 192.168.105.141:3260
Discovery Address: 192.168.105.141:3260
Target name: iqn.1986-03.com.sun:02:608dc428-415c-4a31-858d-f7c62ebee084
Target address: 192.168.105.141:3260, 1
Add the iSCSI device (static configuration) :-
# iscsiadm add static-config <target-name,target-address[:port-number][,tpgt] ...>
e.g.
# iscsiadm add static-config iqn.1986-03.com.sun:02:608dc428-415c-4a31-858d-f7c62ebee084,192.168.105.141
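Note: the static-config entry is only used once static discovery has been enabled on the initiator (the same step appears in the fileserver article further down); if it is not already on, enable it first:
# iscsiadm modify discovery --static enable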
Check the Solaris storage device name of the iSCSI device :-
# format < /dev/null
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 33415 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <VMware,-VMware Virtual S-1.0-256.00GB>
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c2t0100000C29F413EC00002A004A00770Ed0 <SUN-SOLARIS-1-100.00GB>
/scsi_vhci/disk@g0100000c29f413ec00002a004a00770e
Specify disk (enter its number):
The storage device name is c2t0100000C29F413EC00002A004A00770Ed0.
(This article is unfinished.)
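Since the notes stop here, a plausible next step, following the fileserver article below, would be to build a ZFS pool on the new iSCSI device; the pool name "backup" here is only an example:
# zpool create backup c2t0100000C29F413EC00002A004A00770Ed0
# zpool status backup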
Orig : http://breden.org.uk/2008/03/12/home-fi ... r-backups/
Author : Simon
Now that you’ve got your ZFS Home Fileserver up and running, with your file systems created and shared to the other machines on your home network, it’s time to consider putting a backup policy in place.
I’ll show a few different possibilities open to you.
It’s all very well having a RAIDZ setup on your fileserver with single-parity redundancy and block checksums in place to protect you from a single failed drive, and snapshots in place to guard against accidental deletions, but you still need backups, just in case something really awful happens.
I built myself a backup machine for around 300 euros, using hardware similar to that described in the Home Fileserver: ZFS hardware article, but reduced the cost by choosing cheaper components and reusing old SATA drives that were lying on the shelf unused. I will describe the components for this backup machine in more detail elsewhere.
For the purposes of this article, we’ll perform a backup from the fileserver to the backup machine.
The backup machine has Solaris installed on an old Hitachi DeathStar IDE drive I had lying around. These drives don’t have a particularly stellar reliability record, but I don’t care too much, as nothing apart from the OS will be installed on this boot drive. All ZFS-related stuff is stored on the SATA drives that form the storage pool, and it will survive even if the boot drive performs its ‘click of death’ party trick.
The SATA drives I had lying around were the following: a 160GB Maxtor, a 250GB Western Digital, a 320GB Seagate and a 500GB Samsung. In total these drives yielded about 1.2TB of storage space when a non-redundant pool was created with them all. I chose to have no redundancy in this backup machine to squeeze as much capacity as possible from the drives; after all, the data is on the fileserver. In a perfect world I should probably have redundancy on this backup machine too, but never mind, we already have pretty good defences against data loss with this setup.
So let’s create the ZFS storage pool now from these disks. First let’s get the ids of the disks we’ll use:
# format < /dev/null
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@4/ide@0/cmdk@0,0
1. c1t0d0
/pci@0,0/pci1043,8239@5/disk@0,0
2. c1t1d0
/pci@0,0/pci1043,8239@5/disk@1,0
3. c2t0d0
/pci@0,0/pci1043,8239@5,1/disk@0,0
4. c2t1d0
/pci@0,0/pci1043,8239@5,1/disk@1,0
Specify disk (enter its number):
#
Disk id 0 is the boot drive — the IDE disk. For our non-redundant storage pool, we’ll use disks 1 to 4:
# zpool create backup c2t0d0 c2t1d0 c1t1d0 c1t0d0
#
# zpool status backup
pool: backup
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
backup ONLINE 0 0 0
c2t0d0 ONLINE 0 0 0
c2t1d0 ONLINE 0 0 0
c1t1d0 ONLINE 0 0 0
c1t0d0 ONLINE 0 0 0
errors: No known data errors
#
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
backup 1.12T 643G 503G 56% ONLINE -   <-- here's one I already used a bit
#
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
backup 1.07T 28.1G 19K /backup
#
This created a storage pool with around 1.12TB of capacity. I have shown data from a pool that was created previously, so it shows that 56% of capacity is already used.
Let’s try out iSCSI
As I’d heard that iSCSI performs well, I thought it should make a good choice for performing fast backups across a Gigabit switch on a network.
Quoting from wikipedia on iSCSI we find this:
iSCSI is a protocol that allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a popular Storage Area Network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally-attached disks. Unlike Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure.
Sounds good, let’s try it between these two Solaris boxes: (1) the fileserver, and (2) the backup machine.
At this point I was trying to work out, from the notes I kept of my various iSCSI experiments, exactly which commands I had run. But here is another nice feature of ZFS: it keeps a record of all major actions performed on storage pools. So I’ll ask ZFS to tell me which incantations I performed on this pool previously:
# zpool history backup
History for 'backup':
2008-02-26.19:40:29 zpool create backup c2t0d0 c2t1d0 c1t1d0 c1t0d0
2008-02-26.19:43:24 zfs create backup/volumes
2008-02-26.20:07:24 zfs create -V 1100g backup/volumes/backup
2008-02-26.20:09:16 zfs set shareiscsi=on backup/volumes/backup
#
So we can see that I created the ‘backup’ pool without redundancy, then created a file system called ‘backup/volumes’, then created a 1100GB (1.1TB) volume called ‘backup’. Finally, I set the ‘shareiscsi’ property of the ‘backup’ volume to the value ‘on’, meaning that this volume will become an iSCSI target and other interested machines on the network will be able to access it.
Let’s take a look at the properties for this volume.
# zfs get all backup/volumes/backup
NAME PROPERTY VALUE SOURCE
backup/volumes/backup type volume -
backup/volumes/backup creation Tue Feb 26 20:07 2008 -
backup/volumes/backup used 1.07T -
backup/volumes/backup available 485G -
backup/volumes/backup referenced 643G -
backup/volumes/backup compressratio 1.00x -
backup/volumes/backup reservation none default
backup/volumes/backup volsize 1.07T -
backup/volumes/backup volblocksize 8K -
backup/volumes/backup checksum on default
backup/volumes/backup compression off default
backup/volumes/backup readonly off default
backup/volumes/backup shareiscsi on local
backup/volumes/backup copies 1 default
backup/volumes/backup refreservation 1.07T local
#
Sure enough, you can see that it’s shared using the iSCSI protocol and that this volume uses the whole storage pool.
This iSCSI shared volume is known as an ‘iSCSI target’. In iSCSI parlance there are iSCSI targets (servers) and iSCSI initiators (clients).
Now let’s enable the Solaris iSCSI Target service:
# svcadm enable system/iscsitgt
Now let’s verify that the system indeed thinks that this volume is an iSCSI target before we proceed further:
# iscsitadm list target -v
Target: backup/volumes/backup
iSCSI Name: iqn.xxxx-xx.com.sun:xx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Alias: backup/volumes/backup
Connections: 1
Initiator:
iSCSI Name: iqn.xxxx-xx.com.sun:0x:x00000000000.xxxxxxxx
Alias: fileserver
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
VID: SUN
PID: SOLARIS
Type: disk
Size: 1.1T
Backing store: /dev/zvol/rdsk/backup/volumes/backup
Status: online
#
This was performed after the iSCSI initiator was configured and connected, so you see ‘Connections: 1’ and the initiator’s details.
Now we’re done with setup on the backup server. We’ve created a backup volume with 1.1TB of storage capacity from a mixture of old disparate drives that were lying around and we’ve made it available as an iSCSI target to machines on the network, which is needed, as we want to allow the fileserver to write to it to perform a backup.
Time to move on now to the client machine — the fileserver, which is known as the iSCSI initiator.
Let’s do the backup
Now that we’re back on the fileserver, we need to configure it so that it can access the iSCSI target we just created. Luckily, with Solaris that’s simple.
iSCSI target discovery is possible in Solaris via three mechanisms: iSNS, static and dynamic discovery. For simplicity, I will only describe static discovery — i.e. where you specify the iSCSI target’s id and the IP address of the machine hosting the iSCSI target explicitly:
# iscsiadm modify discovery --static enable
# iscsiadm add static-config iqn.xx-xx.com.sun:xx:xx-xx-xx-xxxx-xxxx,192.168.xx.xx
Now that we’ve enabled our fileserver to discover the iSCSI target volume called ‘backup’ on the backup machine, we’ll get hold of its ‘disk’ id so that we can create a local ZFS pool with it. After all, it’s a block device just like any other disk, so ZFS can use it just like a local, directly attached, physical disk:
# format < /dev/null
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@4/ide@0/cmdk@0,0
1. c1t0d0
/pci@0,0/pci1043,8239@5/disk@0,0
2. c1t1d0
/pci@0,0/pci1043,8239@5/disk@1,0
3. c2t0d0
/pci@0,0/pci1043,8239@5,1/disk@0,0
4. c3t0100001E8C38A43E00002A0047C465C5d0
/scsi_vhci/disk@g0100001e8c38a43e00002a0047c465c5
Specify disk (enter its number):
#
The disk id of this backup volume is the one at item number 4 — the one with the really long id.
Now let’s create the storage pool that will use this volume:
# zpool create backup c3t0100001E8C38A43E00002A0047C465C5d0
#
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
backup 1.07T 623G 473G 56% ONLINE -
tank 2.03T 1002G 1.05T 48% ONLINE -
test 3.81G 188K 3.81G 0% ONLINE -
#
Voila, the pool ‘backup’, which uses the iSCSI target volume ‘backup’ hosted on the backup machine, is now usable, so now let’s finally do the backup!
For demo purposes I created a 4GB folder of video content to back up. We’ll time it being sent over a Gigabit network to see how fast it gets transferred — gotta have some fun after all this aggro, haven’t you?
# du -hs ./test_data
4.0G ./test_data
#
# date ; rsync -a ./test_data /backup ; date
Thursday, 13 March 2008 00:20:55 CET
Thursday, 13 March 2008 00:21:50 CET
#
OK, so 4GB was copied from the fileserver to the backup machine in 55 seconds, which is a sustained 73MBytes/second, not bad at all!
That’s all folks!
I’ll tackle other subjects soon like incremental backups using ZFS commands and also using good old ‘rsync’.
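In the meantime, here is a rough sketch of what an incremental ZFS backup could look like in this setup, along the lines of the zfs-backup.pl script earlier on this page; the dataset name ‘tank/media’ and the snapshot names are only examples, and ‘backup’ is the iSCSI-backed pool created above:
# zfs snapshot tank/media@2008-03-14
# zfs send -i tank/media@2008-03-13 tank/media@2008-03-14 | zfs recv -F backup/media
Only the blocks changed since the previous snapshot are sent, so after the first full send the nightly transfers stay small.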
For more ZFS Home Fileserver articles see here: A Home Fileserver using ZFS. Alternatively, see related articles in the following categories: ZFS, Storage, Fileservers.