About Ergaster

  1. I'm managing 2 different Vista machines, one 32-bit and the other x64, both using auto-login and a large-icon Quick Launch toolbar (for seniors). The Quick Launch toolbar keeps collapsing and getting reset to the left under a >> popup menu when the systems are started or woken. Not every time, though. I've been struggling with this for years, and the only fix I can rely on is having them roll back the OS partition to the last Ghost backup, which takes 10 minutes. The data partition with browser and email settings survives a restored OS. My reading of this is that a default setting is being applied in error. I've read about a "race condition" involving auto-login that may have been fixed in some versions of Windows, but no updates have fixed these machines yet. I don't see this problem in XP SP3, but there's no UAC running there, so maybe that's it. The toolbars should hold their settings, period. Since this bug loses any extra toolbars and leaves only the default Quick Launch folder toolbar, it makes things very difficult for novice users. Also, possibly related: some desktop icons that were moved to toolbar folders suddenly show up again on the desktop, and the Documents folder loses its "by type" sorting. What I'd like to do is find the custom settings for the user account and copy them over the default settings in the registry, adding a tracer tag so I know when the bug has struck, like making Quick Launch show its title text or something. That way, even if this bug is never fixed, the machines will look and work as set. Poking around the registry has turned up some possible locations involving "streams" keys, but I don't know how they work. If you do, point us in the right direction, please.
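One low-tech way to detect when the bug has struck: export the suspect key with `reg export` while everything is set up correctly, then diff a fresh export against that known-good copy later. (Reports usually point to `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Streams` for toolbar state, but treat that path as an assumption.) A minimal Python sketch of the comparison step; `reg_diff` is a hypothetical helper, not a real tool:

```python
# Sketch: diff two .reg exports (known-good vs. fresh) to spot when
# Explorer has reverted toolbar settings to defaults. Assumes both
# exports were made with the same `reg export` options.
import difflib

def reg_diff(good_text: str, current_text: str) -> list:
    """Return only the changed lines (prefixed +/-) between two .reg exports."""
    good = good_text.splitlines()
    cur = current_text.splitlines()
    return [l for l in difflib.unified_diff(good, cur, lineterm="")
            if l.startswith(("+", "-")) and not l.startswith(("+++", "---"))]
```

An empty result would mean the settings are intact; any output flags a reverted value, which is roughly the "tracer tag" idea without touching the registry at all.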
  2. I got one of those ExpressCard/34 power-over-eSATA/USB combo cards that come with a full-connector notebook drive cable: the I/O Crest - Syba sy-exp50028, which uses the JMB360 chip. The driver install is confusing enough, but I did find the right one; it's supposed to show up as a SCSI controller. There are some other RAID drivers, and Vista/7 have an AHCI driver they want to use, but the driver on the CD is what I use. Thing is, I have not managed to get any significant data across it, on 2 very different laptops, using XP SP3, Vista, and Win7. I can get directories and file properties, but if I try copying anything big, I get a burst that looks real promising, then nothing... and eventually the drive resets with a clank. Also, the card is making the card slot hot, but I popped the case off and ran it, and nothing on the board is that warm. It's like the thing is causing mainboard components to heat up. I looked for solder bridges and damage inside, but found nothing obvious. Pass-through USB works, I think. So it's now a $20 aux power jack for the Y cable on my USB2 enclosure. Are the JMB360 boards generally this trouble-prone? Are there any known-good identical-function ExpressCards out there? I liked the idea of just popping a drive on the end of a cable and hot-plugging it for temporary high-bandwidth access.
  3. If there is free space showing where your drives should be, I'd suspect that there are no partition table entries for their partitions in the Master Boot Record of the physical drive. The drive hiding tool must have used the MBR to manage the partitions. If the partitions are there, and use standard NTFS or FAT32 file systems, you can use any number of tools to find the partition boot sectors, from which the location and size of the partitions can be determined and used to place entries in the MBR, which will make the partitions visible again. If they're encrypted, then you may have additional work to do. I'd suggest downloading TestDisk and letting it try to find any obvious deleted partitions.
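For anyone who wants to check the MBR by hand before letting a tool write to it: the layout is simple, with four 16-byte partition entries starting at byte offset 446 and the 0x55AA boot signature at offset 510. A minimal Python sketch that inspects a saved copy of sector 0 (assumes the classic MBR format, not GPT):

```python
# Sketch: list the primary partition entries recorded in a raw MBR
# sector (512 bytes). Empty slots have partition type 0x00; the LBA
# start and sector count are little-endian 32-bit values at offsets
# 8 and 12 of each entry.
import struct

def parse_mbr(sector: bytes) -> list:
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    entries = []
    for i in range(4):
        e = sector[446 + 16 * i: 446 + 16 * (i + 1)]
        boot_flag, ptype = e[0], e[4]
        lba_start, num_sectors = struct.unpack_from("<II", e, 8)
        if ptype != 0:  # skip empty slots
            entries.append({"type": ptype, "lba_start": lba_start,
                            "sectors": num_sectors,
                            "bootable": boot_flag == 0x80})
    return entries
```

If this returns an empty list on the affected drive, the hiding tool really did blank the table entries, and the recovery path is exactly as described: find the boot sectors, derive start/size, and re-create the entries.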
  4. There were 3 boot sectors - two matching ones at either end of the partition, and one at 63 which pointed to an offset that was all FF FF FF FF... no other $Mft could be found. The de-coupling of the file system and the data means that the new/only $Mft and all of its external attributes are not cluster-aligned with the "original" data storage area, so it has to move. Going with my original plan, I did the surgery, the patient survived, and now has a fully intact memory! Here's what was done:
     - Manually recover/delete a file that was in the way of the new $MftMirr & $UpCase copies
     - Copy the backup boot sector up 1985 sectors & set its "hidden sectors" to 6081
     - Copy $MftMirr & $UpCase up 1985 sectors
     - Copy \System Volume Information\tracking.log up 1985 sectors
     - Save out the first 5 GB of the partition starting at boot sector LBA 4096
     - Put that chunk back starting at LBA 6081
     - Set the boot sector "hidden sectors" to 6081
     - Change its MBR start sector to 6081
     - Re-mount the drive/partition
     The new (old) alignment now puts a file header at the start cluster of each file, which is so much more convenient, I must say. And the fragments don't contain parts of other files, which helps. That 5 GB chunk procedure was a time-saver, as it contains all the rest of the metafiles, folders, etc., and luckily no data files near it. It can be put back at 4096 with minimal fuss, but so far it looks good at 6081. MyDefrag is handy for visually locating metafiles in relation to data when planning what to copy and detecting possible overwrite conflicts, keeping in mind that it reports only what the file system has mapped, and that the data files are not inside those boundaries if they're not aligned. But it gives a good overall picture of what's where, with zoom capability. Even if every metafile and log doesn't have to be moved, it's best to keep everything while we shift the sector alignment of the storage area.
Since I don't know what can safely be left out, and I don't want the OS to re-create files or write anywhere that isn't already allocated to the file system, the partition should mount as if we had done nothing to it. Once the files are copied off, the drive will be zeroed and re-partitioned with proper alignment under Win7. Technically, the fixed partition is functional as-is, but the drive controller has to do more work serving up clusters that span the Advanced Format 4k sectors it uses natively. Solve one problem, create another...
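The various "copy up 1985 sectors" steps are all the same primitive: move a run of sectors to a higher LBA inside a raw image. A rough Python sketch of that primitive, assuming 512-byte logical sectors; it copies chunks from the end backwards so overlapping source and destination ranges are safe. (`move_region` is a hypothetical helper, not a real tool, and it's destructive, so only run it against a working copy of the image.)

```python
# Sketch: shift a run of sectors upward (to a higher LBA) inside a raw
# disk image, e.g. "up 1985 sectors". Source bytes below the new start
# are left in place, matching the copy-not-move operations above.
SECTOR = 512

def move_region(img_path: str, src_lba: int, n_sectors: int, shift: int,
                chunk: int = 1 << 20) -> None:
    """Copy n_sectors starting at src_lba to src_lba + shift (shift > 0).
    Works backwards from the end so overlapping ranges don't corrupt."""
    with open(img_path, "r+b") as f:
        remaining = n_sectors * SECTOR
        src_end = (src_lba + n_sectors) * SECTOR
        dst_end = src_end + shift * SECTOR
        while remaining:
            step = min(chunk, remaining)
            src_end -= step
            dst_end -= step
            remaining -= step
            f.seek(src_end)
            data = f.read(step)
            f.seek(dst_end)
            f.write(data)
```

The 5 GB chunk step would be `move_region(img, 4096, 5 * 2**30 // 512, 1985)` under these assumptions: 4096 + 1985 = 6081, matching the new "hidden sectors" value.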
  5. Ok, so you're going to read a book. The cover looks fine, the table of contents looks fine, the book has all its pages, and everything is ready to go, so you go to a chapter on the page where it's supposed to start, and the chapter doesn't start there! You're looking at the last several pages of the previous chapter, and have to turn several pages before you find the chapter heading of what you wanted to read. The partition in question is like that, except it uses logical sectors and is fragmented. You have to add 1985 logical sectors to everything to find what you expected to find. But all the fragment runs in a given file will truncate 1985 sectors early. You can run past the truncation with a disk editor and see the rest of the fragment, which is more obvious if there is free space or something very different after it. I went around and randomly checked several dozen files all over the partition, and they're all affected the same way. Nothing is "wrong" with the $MFT from a structural standpoint, so you can check the disk with any program and find no problem. It just points to the wrong places in the user-data. It doesn't point to the wrong places in its own infrastructure, such as folder lists and its own metafiles.
  6. What's the easiest way to "shift the ground" under a mis-aligned NTFS file system? After a bad interaction with Linux partitioning software, a properly aligned NTFS 4k-cluster storage-only partition on an Advanced Format drive got hosed, and Vista re-built the file system seemingly from scratch. There are few to no traces of any other valid file system except an abandoned NTFS boot sector at 63 (!) which references a $MFT that is literally all FF'd-up. The valid boot sector is at 4096, and there is no way back to "what-was" within the current file system. The data is all there, the partition checks out perfect, but the references to fragment runs are -all- 1985 sectors early! Obviously, that doesn't line up with the clusters or the files. My first thought is - can I just copy the NTFS boot/backup sector, $MFT/metadata files, and folder files, up 1985 sectors (holding their relative positions) and update the MBR? Is there more to this? There's a bunch of empty space before and after the data in the partition. I only need to copy enough infrastructure to mount the partition and copy its contents off - or not even mount it, and instead use Ghost from DOS to image a properly aligned NTFS. I could change the cluster size to 512 and recalculate every fragment run (hundreds). I have the output of Microsoft's old NFI tool for this partition, giving me logical sector-based runs for every file. Working from that, I can manually offset and concat each fragment of a given file with a disk editor, but that's a last resort. Tips or tools to make this LESS complicated?
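If it does come down to working from the NFI output, at least the arithmetic is mechanical: every recorded fragment run is 1985 sectors early, so the true byte position of each fragment is (start + 1985) * 512. A trivial sketch (the run tuple format and names are assumed, since NFI's text output would need its own parsing):

```python
# Sketch: correct NFI-reported fragment runs that point 1985 sectors
# early, returning (byte_offset, byte_length) pairs at the true location
# for use with a disk editor or extraction script.
SECTOR = 512
SHIFT = 1985  # sectors by which every run on this partition is early

def corrected_runs(runs):
    """runs: iterable of (start_sector, length_sectors) from NFI output."""
    return [((start + SHIFT) * SECTOR, length * SECTOR)
            for start, length in runs]
```

Feeding each file's corrected runs to a script that seeks, reads, and concatenates would automate the "last resort" manual disk-editor work.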
  7. I've been playing around with DISKEDIT on the FAT of a 32GB FAT32 logical drive, and by marking unused FAT entries with the number 268435447 (0x0FFFFFF7), I can make them show up in any defrag program as BAD clusters, and it's PERFECT for my purposes, whatever they are. DiskTune or SpeedDisk shows even a SINGLE bad cluster, no matter how you scale the window, no matter how big the drive is. I can get readouts of cluster numbers, see what files are occupying allocated clusters, look for holes and interleaved files, change the colors of the map, all in relation to the newly-marked free clusters that meet certain criteria. The tradeoff is, either I write graphics code that can do that, or use the FAT of a big drive as a free pixel map. Reduce, reuse, recycle. Plus, it's just plain subversive and wrong. The bad clusters can be cleared like nothing ever happened. Blasphemy, to be sure, but there it is. Let's require everybody to do it that way from now on. I will try experimenting with DOS assembler. There's some reference stuff to look at here. Can WDM be used for FAT/cluster work? By mere mortals, that is.
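For the FAT-entry edit itself, the one FAT32 subtlety is that each entry is 32 bits wide but only the low 28 bits carry the cluster value; the top 4 bits are reserved and should be preserved when writing the 0x0FFFFFF7 BAD marker. A Python sketch operating on an in-memory copy of the FAT (writing it back to the drive, and to both FAT copies, is the separate, riskier step):

```python
# Sketch: mark selected FREE FAT32 entries as BAD (0x0FFFFFF7) so defrag
# maps display them, preserving each entry's reserved top 4 bits.
# Returns the clusters actually changed, so the marking can be undone
# by writing FREE (0) back to exactly those entries.
import struct

FAT32_BAD = 0x0FFFFFF7
FAT32_FREE = 0x00000000

def mark_free_as_bad(fat: bytearray, clusters) -> list:
    changed = []
    for n in clusters:
        off = n * 4
        (val,) = struct.unpack_from("<I", fat, off)
        if val & 0x0FFFFFFF == FAT32_FREE:          # only touch free entries
            struct.pack_into("<I", fat, off,
                             (val & 0xF0000000) | FAT32_BAD)
            changed.append(n)
    return changed
```

Allocated entries are skipped, so valid files can't be damaged by this, which matches the "no harm to valid files" observation in post 10 below.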
  8. Thought your bladder would've been empty by now. Anybody else have something to contribute? This is an interesting area to explore. What are some good pages on FAT32? I'm thinking that Win32ASM might be the way to go for development of small utils like this.
  9. See end of 1st paragraph, post #3.
  10. Unless I'm completely misunderstanding this, BAD clusters are marked in the FAT, independent of any directory or other area on the disk. The clusters themselves are not written to. Bad clusters are shown by all defraggers I've ever used. If I mark some free clusters as BAD, there's no harm to valid files on the disk, and it can be undone easily. What do I put in the FAT? Let's say, just for starters, I use a sector editor to change FAT entries. What is the simplest development tool to do sector/cluster reading and manipulate the FAT? This is low-level stuff, so I don't need a full-blown feature-bloated compiler, do I?
  11. I don't want to write a defragger (!), just USE one to display free-space clusters that I have marked because they do or don't contain data. Very simple goal. I have to read the FAT and step through all free clusters looking for things other than zero. Then if a free cluster has even one set bit, it needs to be marked temporarily such that it shows up as a different color than empty free space. A single cluster may not show up well, but that's where a log output (to a different drive) would help. To save time, once anything is encountered or the string is found, break off and step to the next free cluster. Undo would be based on the log, or by just resetting any bad clusters. Most drives don't show bad clusters anyway, but we can check and log those first. Now how is this different than a recovery program that looks for file sigs and saves contiguous or FAT-image referenced clusters to a different drive, you ask? For one, it doesn't copy anything, and second, it can operate over huge logical drives without writing to any new data areas, and it should be able to undo what it does. It provides a GRAPHICAL means to check for data structures in the free space of drives without having to write any graphics code. Should be a good starter project with a short learning curve. Even shorter would be the marking of valid files, but I'm guessing that would be by setting a directory attribute, not touching its FAT chain.
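The scan loop described above is short enough to sketch end-to-end. In a real run, the layout numbers (total clusters, data-area offset, cluster size) would be read from the volume's BPB; here they are plain parameters, and all names are hypothetical. The function just yields the free clusters that aren't all-zero, so the caller can log them and then mark them BAD:

```python
# Sketch: walk the FAT32 free-space map and report free clusters whose
# on-disk content contains any set bit, for later logging/marking.
# vol is a file object opened on the raw volume or an image of it.
import struct

def scan_free_clusters(vol, fat: bytes, total_clusters: int,
                       data_offset: int, cluster_size: int):
    for n in range(2, total_clusters + 2):      # data clusters start at 2
        (entry,) = struct.unpack_from("<I", fat, n * 4)
        if entry & 0x0FFFFFFF != 0:
            continue                            # allocated, BAD, or EOC
        vol.seek(data_offset + (n - 2) * cluster_size)
        if any(vol.read(cluster_size)):         # any() stops at first nonzero byte
            yield n
```

It only ever reads the data area and never writes, so combined with a reversible FAT marking step it matches the stated goals: no copying, no new data written, undoable.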
  12. It's been a while since I tried any programming, and I have a 98se system to experiment with, so here goes: I want to search ONLY the free space for zeroed clusters and mark them so they show up visually in a defrag program's initial map display. Show the "clean free space" or the inverse. So, would marking them BAD be "good"? That won't cause the IDE controller to spare out those areas, I hope. Since the above is in the free space, we don't have to worry about chains, but what if we wanted to highlight a particular file's layout on the disk? First of all, do any utilities do that already? I've seen some defraggers show "unmovable" files. Can we toggle that attribute for any file? How about raw clusters? Suggestions as to what development tools to use for the easily intimidated procedural thinker?