GALGAME image installation: how do I install an image made up of MDF, MDS, md0, md1, md2 files (merged WMV)?

Can a movie made up of these four files — .md0, .md1, .mdf, .mds — be played with Baofeng Player (暴风影音)?
Source: the Internet | Posted: 12:37:11 | Editor: Lu Xiaoqian (鲁晓倩)
To help users with the question ".md0 .md1 .mdf .mds — can a movie made up of these four files be played with Baofeng Player?", Xuewang (学网) collected related solutions from around the Internet. The user's full question was:

My computer has all four of these files — how do I play them? Please be specific! Do they have something to do with a virtual drive?

Solution 1:

Open Baofeng Player's (暴风影音) general settings and associate the four file formats with the player.
Why is my RAID /dev/md1 showing up as /dev/md126? Is mdadm.conf being ignored? (Ask Ubuntu)
I created a RAID with:
sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2
sudo mdadm --detail --scan returns:
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
which I appended to /etc/mdadm/mdadm.conf, see below:
# mdadm.conf
# Please refer to mdadm.conf(5) for information about this file.
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This file was auto-generated on Mon, 29 Oct :12 -0500
# by mkconf $Id$
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
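The append step can be done in one pipeline on a real system, e.g. `sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf`. A self-contained sketch of the same append, written to a scratch file instead of the real /etc/mdadm/mdadm.conf so it is safe to run anywhere:

```shell
# Replay of the append step against a scratch file. On a real system:
#   sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
scan='ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb'
conf=$(mktemp)
printf '%s\n' "$scan" >> "$conf"
grep -c '^ARRAY' "$conf"   # prints 2: both array definitions landed in the file
rm -f "$conf"
```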
cat /proc/mdstat returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb1[0] sdc1[1]
blocks super 1.2 [2/2] [UU]
unused devices: <none>
ls -la /dev | grep md returns:
brw-rw---- 1 root disk 9, 1 Oct 30 11:06 md1
brw-rw---- 1 root disk 9, 2 Oct 30 11:06 md2
So I think all is good and I reboot.
After the reboot, /dev/md1 is now /dev/md127 and /dev/md2 is now /dev/md126?????
sudo mdadm --detail --scan returns:
ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
cat /proc/mdstat returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
blocks super 1.2 [2/2] [UU]
md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
blocks super 1.2 [2/2] [UU]
unused devices: <none>
ls -la /dev | grep md returns:
drwxr-xr-x 2 root root 80 Oct 30 11:18 md
brw-rw---- 1 root disk 9, 126 Oct 30 11:18 md126
brw-rw---- 1 root disk 9, 127 Oct 30 11:18 md127
All is not lost; I ran:
sudo mdadm --stop /dev/md126
sudo mdadm --stop /dev/md127
sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2
and verify everything:
sudo mdadm --detail --scan returns:
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
cat /proc/mdstat returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb1[0] sdc1[1]
blocks super 1.2 [2/2] [UU]
unused devices: <none>
ls -la /dev | grep md returns:
brw-rw---- 1 root disk 9, 1 Oct 30 11:26 md1
brw-rw---- 1 root disk 9, 2 Oct 30 11:26 md2
So once again, I think all is good and I reboot.
Again, after the reboot, /dev/md1 is /dev/md127 and /dev/md2 is /dev/md126?????
sudo mdadm --detail --scan returns:
ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
cat /proc/mdstat returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
blocks super 1.2 [2/2] [UU]
md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
blocks super 1.2 [2/2] [UU]
unused devices: <none>
ls -la /dev | grep md returns:
drwxr-xr-x 2 root root 80 Oct 30 11:42 md
brw-rw---- 1 root disk 9, 126 Oct 30 11:42 md126
brw-rw---- 1 root disk 9, 127 Oct 30 11:42 md127
What am I missing here?
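As an aside, the names the arrays ended up with after a reboot can be pulled out of /proc/mdstat mechanically. A minimal sketch, fed a copy of the output above so it runs anywhere (on a live system you would read /proc/mdstat directly, e.g. `awk '$2 == ":" {print $1}' /proc/mdstat`):

```shell
# Extract array names from /proc/mdstat-style text. Assumption: each array
# line has the form "mdNNN : active ...", so the name is the first field.
mdstat='md126 : active raid1 sdc2[1] sdb2[0]
md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]'
echo "$mdstat" | awk '$2 == ":" {print $1}'
# → md126
#   md127
```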
I found the answer here.
In short, I chopped my /etc/mdadm/mdadm.conf definitions from:
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
to:
ARRAY /dev/md1 UUID=aa1f85b0:a2391657:cfd0e
ARRAY /dev/md2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
and then ran:
sudo update-initramfs -u
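The chop can also be scripted: strip the metadata= and name= fields from each ARRAY line so only the device path and UUID remain. A sed sketch, shown on a single line from the question (applying it to the real file with something like `sudo sed -E -i '/^ARRAY/ s/ (metadata|name)=[^ ]+//g' /etc/mdadm/mdadm.conf` is an untested assumption — back the file up first):

```shell
# Strip the metadata= and name= fields from an ARRAY definition.
# Assumption: fields are space-separated, as in mdadm --detail --scan output.
line='ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd0e'
echo "$line" | sed -E 's/ (metadata|name)=[^ ]+//g'
# → ARRAY /dev/md1 UUID=aa1f85b0:a2391657:cfd0e
```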
I am far from an expert on this, but my understanding is this:
The arrays get assembled early in boot, from the initramfs, before the normal array-assembly step runs. That early assembly does not use /etc/mdadm/mdadm.conf, and since the arrays were already assembled by then, the normal assembly step that does read mdadm.conf is skipped. Calling sudo update-initramfs -u rebuilds the initramfs so that the system picks up the current configuration at boot.
I am sure someone with better knowledge will correct me / elaborate on this.
Use the following line to update the initrd for each respective kernel that exists on your system:
sudo update-initramfs -k all -u
sudo update-initramfs -u
was all I needed to fix that.
I did not edit anything in /etc/mdadm/mdadm.conf.
I had the same issue. This solution solved my problem.
I managed to replicate the issue in the following manner:
When Software Updater asked if I wanted to update packages (including the Ubuntu base and the kernel), I said OK. The newly installed kernel used the current kernel's/system's settings. I then created the array, so only the currently running kernel got updated with the new RAID settings. Once I rebooted, the new kernel knew nothing about the RAID and gave it an md127 name!
