
SlugFiller

Member
  • Posts: 127
  • Joined
  • Last visited
  • Donations: 0.00 USD
  • Country: Israel
  1. @Queue: "Visual", by definition, negates any "speed and resource benefits". It is probably many times slower than JIT-compiled Java code, and while I can't testify as to the library size, the memory usage is undoubtedly much higher. It would be one thing if this were C: then you'd have good performance, and it wouldn't be proprietary. It would also be easily portable, although not necessarily cross-platform (depending on whether the C runtime or the Win32 API is used). Also, I hope I don't have to explain the issues with a language with a proprietary, dynamically linked runtime (although I suppose Windows may also be counted as a "proprietary runtime"). Especially not on this forum, in this thread (*cough* VCRun2008 *cough*).
  2. Why are you using Delphi? The JDK is available for free from the Sun Java website. And it's a much better language too, although that doesn't really say much. I mean, Pascal + Visual + Proprietary - any chance of cross-platform compatibility = worthless language. According to the documentation I got, the original 95 IFSMgr had a bug in that it cached the header of unicode.bin, but its cache was only large enough to hold 18 language entries. The documentation was a bit unclear, but it might be able to work with more languages, provided the OEM and ANSI code pages are in the same 18-language block. Certain localized versions did not have the bug. I'm not sure about the status of the 98 driver, although I imagine it has probably fixed the bug already. The format itself should support up to a 4GB file; I didn't feel like pushing it to the limit. My original unicode.bin only had 5 code pages, and I've added an extra 4 (all East Asian). Your 30-page bin should probably go on mgdx or something. Could you list the code pages you have there? The code page data itself is from the Microsoft contribution to unicode.org. I made a small javascript page which converts the text files into the necessary Java commands (the gist of that conversion is sketched after the last post below). Obviously, I didn't sit down and manually input 60000 character codes. So any "blanks" in the data are the official Microsoft stance on the code page; it's no coincidence if it matches what you already have.
  3. I've finally managed to compile and run the VxD equivalent of "Hello world", coded in C++ using 98DDK and VS6's CL. This will give me the ability to experiment a bit, and see if I can get something akin to Unicode file access, one way or another. One issue is that once the VxD is loaded, it cannot be unloaded or reloaded, even if the file is modified. The only solution is to restart the computer for every revision. That gets old really fast. P.S. 8 downloads, 0 comments
  4. I've made a small Java program for creating my unicode.bin. Attached below. It is roughly 400 lines of code, plus 60000 lines of CP data. It would be easier than trying to make such a program yourself. The bin uses a tree structure to define lookup ranges, and my program automatically sorts it out as a binary tree. It could be slightly improved by not creating a new range for a gap of fewer than 6 characters, and instead padding the gap with underscores; this would produce a slightly smaller bin (the idea is sketched after the last post below). MakeUnicodeBin.rar But, like I said, I'm looking for a more comprehensive solution. I am contemplating using the file system hook to convert base64 or hex filenames into Unicode ones. To ensure compatibility, though, I want it to use a special device name which wouldn't collide with real devices. But I'm not sure whether the hook is even called for device names that don't exist. The alternative is to use Ring0 FileIO. This is a bit more complex, since it requires quite a bit of bookkeeping and consideration of edge cases. Unfortunately, developing VxDs is VERY complex. I still have a lot to learn before I can properly start testing. If I am successful, though, patching KEx would be incredibly easy. The new architecture is even easier to extend and deploy than the previous one. Making file APIs which attempt to detect a hex-to-unicode helper device is actually quite simple.
  5. @Tihiy: I was actually wondering more about your experiment process. Did you try to create a unicode-named file, only to see it created with underscores? Did you verify the underscores were actually in the filename, and not just in the display? (If it is just in the display, the file would not load in any ANSI program.) @Joseph_sw: I've already replaced my Kernel32. I've also messed around with the NLS files. They affect the GUI behavior, but not the file system. Having a mismatch between the GUI code page and the FS code page just causes files of either locale to be inaccessible. Well, I've done some more research. Apparently, IFSMgr has an ANSI API which is used by Kernel32. (I think it also has a Unicode API, but it's hard to get concrete details on IFSMgr's API.) When it receives an ANSI filename, it converts it using tables located in unicode.bin (one file for all CPs, now there's a bottleneck; what such a table lookup looks like is sketched after the last post below). I've modified the file to add a few extra code pages, and successfully gotten access to localized files. It uses the same registry key as the GUI code page, but is more limited since it uses a single data file (instead of the pluggable NLS files). It does require a restart every time I want to change code pages, and at any given time I only have access to some of my files. It only supports encodings of up to 2 bytes per character, so creating a UTF-8 code page is out of the question (unless I want to recode IFSMgr itself). Still, there should be a way to access IFSMgr using Unicode calls, either directly from the kernel, or from a dedicated VxD. If I could find more information about its API, maybe I could get true Unicode file access to work. One thing I did notice is that Explorer isn't exactly "well behaved" when it comes to double-byte filenames. It displays them okay in the list, but improperly when renaming. It also fails to launch localized files when clicked (DnD into an application works, though). I wonder if there's an XP Unicode version of Explorer I could use or something.
  6. What do you mean by "turns them"? Doesn't it call IFSMgr directly (which, AFAIK, takes unicode device names)? What did you test exactly? FAT32 uses UTF-16 long file names, so somewhere between CreateFileA and the IO subsystem, the ANSI filename is converted to UTF-16 (using a distribution-dependent code page). The only question is where. If I knew the answer to that, I could patch that area to use a more comprehensive encoding, such as UTF-8. Then supporting arbitrarily localized filenames would be simple. Since I've recently tested replacing my ifsmgr.vxd with localized versions (those were hard to track down), to no effect, I think it is safe to assume it is locale-independent. This does leave the possibility that the conversion occurs -after- ifsmgr.vxd, inside vfat.vxd. I did try replacing it, to no effect, but I didn't choose highly varied locales there, so maybe it's just a coincidence. Still, it doesn't seem too likely, considering it gets its input as a unicode string already. Unfortunately, I couldn't get a good description of the path taken between Kernel32 and ifsmgr. Like I've said, I've tried finding a good DLL-capable disassembler to get a better view of the CreateFileA implementation, with no success. I did start poking around my copy of the DDK to try and see if I could write a file system driver. I figured I could, at the very least, write a driver that takes filenames as hex-encoded or base64-encoded strings and converts them to filenames using some UTF convention. I could then use that to either call another driver (one which supports unicode filenames), or implement my own version of the FAT32 file system. Unfortunately, I couldn't find any good examples of file system implementations using the DDK. I did find one which uses VtoolsD, so maybe that can help. It is also quite disturbing that not one of the DDK examples is written in pure C or C++; all use assembler code.
  7. Do NOT use uTorrent, regardless of your OS. uTorrent has a buggy upload manager: it fails to upload to other clients. The presence of uTorrent clients on a tracker usually results in the torrent being much harder to download. The increasing proliferation of uTorrent clients is killing the BitTorrent network. For torrents I usually use BitTransmission on Linux. Keeping a Linux box or dual-boot especially for your torrents is worth the effort; I've personally observed significant performance differences with the same torrent files. If a second OS is not your thing, try FDM. It has torrent support, isn't too big, and works decently well.
  8. Yet to be tested. I'm afraid it's not that easy. I'm basing this on the FileMon code. I've actually run FileMon on my 9x box, so I know it works. If I knew how to load and call VxD functions directly from KernelEx, it wouldn't be too difficult for me to create my own VxD to do the Zw stuff, if necessary. Hmm... In fact, maybe I can create a namespace which takes hexadecimal strings, converts them to unicode, and then relays to the appropriate sub-driver. Then the only thing I would have to do is add a bin2hex in CreateFileW, and pass the result to CreateFileA (roughly as sketched after the last post below). The question is, would that method be enough to support all basic file operations?
  9. Hmm... I've been looking up information on VxDs recently, and found out something interesting. Apparently, Win9x's file system manager, ifsmgr.vxd, as part of Microsoft's effort to be backwards compatible with NT4, uses unicode strings to identify resources. The kernel-mode ZwCreateFile takes a unicode string, which may be generated from a wide-char string, and this apparently applies to 9x. So, the theory is, if these calls are made directly from a KEx API, 9x can be made to support real unicode versions of CreateFile and similar functions. In other words, 9x could support filenames in any given locale in any given version. And this does not require any VxD patching or rewriting. Now, obviously there are a few issues. The first being: can Kernel32 truly call VxD or DDK functions directly? Secondly, what sort of conversion might be required between a handle returned by the kernel-mode ZwCreateFile and the handle used by user applications for calls such as ReadFile? Are they identical? Does Kernel32 keep its own objects and/or handles? Short of decompiling Kernel32 (I've yet to find a half-decent PE-file disassembler), I guess the only way to test is with trial and error. Of course, kernel-level errors can be very risky. One thing I was wondering about is whether the per-application KEx configuration can be extended. If true unicode file access is a possibility, it would be nice to set the code page for ANSI file functions, such as CreateFileA, on a per-program basis (sketched after the last post below). But there is a question of whether that configuration can be read from the overridden CreateFileA itself (or if that would create some sort of infinite loop). P.S. Does anyone ever read my posts? I never seem to get a reply, and I can't help but wonder if they are even visible to other people.
  10. Woo! Inkscape finally works! Pango-Cairo finally works! The bad news is, I don't need it anymore. I've found a much better SVG library for Java, and for my browser I've switched to K-Meleon, a native Win32 browser built on the Gecko engine, which works just fine on 9x (actually, I should say "it works faster than any browser I've used to date", but, you know...). Well, apps have never been a good reason to do hard OS work. Someone, somewhere, has already made a more compatible and better-featured version. So I'll be testing out various games soon. Will report if I find anything of interest. By the way, if I'm reading the source correctly, there is no more need for code generation to create a new API in v4. Rather, the APIs are compiled directly. Is that correct? I may consider porting my filename extra-compatibility filter to the new architecture. Actually, from the looks of it, it won't require too much porting.
  11. It's been a while since the last update. I hope this is still being worked on. With SeaMonkey setting its official version to 2.0, and recent games requiring GfWL, the need for KernelEx is becoming greater than ever. In its current incarnation, it doesn't actually run any of the real XP-only programs (the ones that need more than a Windows version change and a W-to-A). Perhaps work should proceed with a "pick a program, get it to run" methodology. I mean this in a public form: users on this board could help test versions of KEx against a specific program, chosen by Xeno86 or tihiy. That would give a clearer objective and focus, and would help make programs fully usable on a maximal number of configurations.
  12. Two questions: 1. Could the new KEx architecture conceivably be used to stub out missing imports from non-system DLLs? For example, could I choose to have a non-critical void-return function in a gtkmm DLL, placed in an application's binary folder, stub-faked with a no-op? (What such a stub would look like is sketched after the last post below.) 2. How often does KEx update now? When should we expect RC 3?
  13. "Set it to run in XP mode." Well, I tried that, and got a weird-looking, oversized, empty dialog, and the program never started up. It didn't exactly work perfectly on KEx 3, but it worked better than that. Maybe I'm missing something...
  14. How do I activate AdvancedGDI for a program? I want to try Inkscape in the new KEx, but it seems like it still doesn't come with AdvancedGDI enabled by default, despite it being a GTK 2.8 program.
  15. This forum already has improved generic ATA drivers. Generic SATA drivers are coming soon, thanks to the efforts of Xeno in porting their NT counterparts to 9x. Contrary to intuition, generic drivers are usually better than vendor-specific ones, probably because they are wanted by a much larger audience, which both motivates development and provides a far wider testing base. That, and focusing on standards compliance in hardware use is not that different from applying standards compliance in code, which commonly adds stability. In other words, "less hackity-hack-hack". That being said, you are somewhat exaggerating in your claim that Intel may have been influenced to discontinue their drivers. Intel were always behind on drivers, even before certain OSs reached EOL. They hardly ever release any worthwhile drivers for any OS, and the ones they do release are usually buggy as hell. If you ever have the misfortune of having to use Intel hardware, I've got one tip for you: third-party drivers. I would have given the exact same tip 8 years ago, too.
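
On the code page data mentioned in post 2: the Microsoft mapping tables on unicode.org are plain text, one byte value per line, roughly "0xNN<TAB>0xNNNN<TAB># name", with undefined bytes lacking the Unicode column. A minimal C sketch of the kind of conversion the javascript page performs; the input filename and the one-assignment-per-mapping output are placeholders, not the actual Java commands the tool emits.

    #include <stdio.h>

    /* Minimal sketch: read a Microsoft codepage mapping table (e.g. a
     * CPxxx.TXT file from unicode.org) and print one "table[byte] = unicode;"
     * line per mapping.  Lines look roughly like
     * "0x41<TAB>0x0041<TAB>#LATIN CAPITAL LETTER A"; bytes without a Unicode
     * column are undefined in the code page and stay blank. */
    int main(int argc, char **argv)
    {
        FILE *f = fopen(argc > 1 ? argv[1] : "CP862.TXT", "r");
        char line[256];
        if (!f) {
            perror("open mapping file");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            unsigned int byte, uni;
            if (line[0] == '#')
                continue;                       /* comment/header line */
            if (sscanf(line, "0x%x 0x%x", &byte, &uni) == 2)
                printf("table[0x%02X] = 0x%04X;\n", byte, uni);
            else if (sscanf(line, "0x%x", &byte) == 1)
                printf("/* 0x%02X is unmapped in this code page */\n", byte);
        }
        fclose(f);
        return 0;
    }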
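
Post 4's "smaller bin" idea (do not open a new lookup range for a gap of fewer than 6 characters; absorb the gap and pad it with underscores) is essentially a range-merging pass over the mapping table. A minimal sketch of that pass, in C rather than the Java of the actual tool; the 256-entry table, its sample contents and the range struct are stand-ins, and the real unicode.bin layout is not reproduced here.

    #include <stdio.h>

    #define TABLE_SIZE 256      /* one single-byte code page, for illustration */
    #define MAX_GAP    5        /* gaps of fewer than 6 entries get absorbed   */
    #define UNDEFINED  0        /* marks bytes with no Unicode mapping         */

    /* A contiguous run of table entries emitted as one lookup range. */
    struct range {
        int first, last;
    };

    /* Build ranges over the defined entries of 'table', merging across short
     * gaps so that fewer (slightly longer) ranges are emitted.  Holes inside
     * a merged range would be stored as underscores in the real file. */
    static int build_ranges(const unsigned short *table, struct range *out)
    {
        int count = 0, i = 0;
        while (i < TABLE_SIZE) {
            if (table[i] == UNDEFINED) { i++; continue; }
            out[count].first = i;
            out[count].last  = i;
            while (i < TABLE_SIZE) {
                if (table[i] != UNDEFINED) {
                    out[count].last = i++;
                    continue;
                }
                /* measure the gap; end the range only if it is too long */
                int gap = 0;
                while (i + gap < TABLE_SIZE && table[i + gap] == UNDEFINED)
                    gap++;
                if (gap > MAX_GAP)
                    break;
                i += gap;       /* short gap: absorb it (padded with '_') */
            }
            count++;
        }
        return count;
    }

    int main(void)
    {
        unsigned short table[TABLE_SIZE] = {0};
        struct range ranges[TABLE_SIZE];
        int i, n;

        for (i = 0x20; i <= 0x7E; i++) table[i] = (unsigned short)i;  /* ASCII */
        table[0xA0] = 0x05D0;                          /* a lone sample mapping */

        n = build_ranges(table, ranges);
        for (i = 0; i < n; i++)
            printf("range %d: 0x%02X-0x%02X\n", i,
                   (unsigned)ranges[i].first, (unsigned)ranges[i].last);
        return 0;
    }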
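
The table-driven conversion described in post 5 (IFSMgr turning an ANSI/OEM filename into UTF-16 via unicode.bin) amounts to a per-character lookup with at most a lead byte plus a trail byte per character. A sketch under that assumption; the table names and sample contents are invented stand-ins, not the actual IFSMgr code. It also shows why a format limited to 2 bytes per character cannot hold UTF-8, which needs up to three bytes for a BMP character.

    #include <stdio.h>
    #include <stddef.h>

    static unsigned short sbcs_to_unicode[256];        /* single-byte mappings    */
    static unsigned short dbcs_to_unicode[256][256];   /* [lead byte][trail byte] */
    static unsigned char  is_lead_byte[256];           /* 1 if byte starts a pair */

    /* Convert an ANSI/OEM path to UTF-16, one character at a time. */
    static size_t ansi_to_utf16(const unsigned char *src, unsigned short *dst,
                                size_t cap)
    {
        size_t n = 0;
        while (*src && n < cap) {
            unsigned char c = *src++;
            if (is_lead_byte[c] && *src)               /* double-byte character */
                dst[n++] = dbcs_to_unicode[c][*src++];
            else                                       /* single-byte character */
                dst[n++] = sbcs_to_unicode[c];
        }
        return n;                                      /* UTF-16 units written  */
    }

    int main(void)
    {
        unsigned short out[16];
        size_t i, n;

        for (i = 0x20; i < 0x7F; i++)
            sbcs_to_unicode[i] = (unsigned short)i;    /* identity for ASCII     */
        is_lead_byte[0x81] = 1;                        /* a Shift-JIS lead byte  */
        dbcs_to_unicode[0x81][0x40] = 0x3000;          /* ideographic space      */

        n = ansi_to_utf16((const unsigned char *)"A\x81\x40.txt", out, 16);
        for (i = 0; i < n; i++)
            printf("U+%04X ", (unsigned)out[i]);
        printf("\n");
        return 0;
    }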
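
The bin2hex idea from post 8 (CreateFileW encodes the wide path as hex digits and hands it to the ordinary ANSI CreateFileA under a helper device name, with a helper driver decoding it back to Unicode) could look roughly like this. The "UNIHEX" prefix, the helper driver behind it and the override itself are assumptions for illustration; only CreateFileA and CreateFileW are real APIs here.

    #include <windows.h>
    #include <stdio.h>

    #define UNIHEX_PREFIX "\\\\.\\UNIHEX\\"   /* hypothetical helper device path */

    /* Encode a wide path as UNIHEX_PREFIX followed by four hex digits per
     * UTF-16 unit.  Returns the length written, or 0 if the buffer is small. */
    static size_t encode_unihex(LPCWSTR path, char *buf, size_t cap)
    {
        size_t pos = sizeof(UNIHEX_PREFIX) - 1, i;
        if (cap <= pos)
            return 0;
        memcpy(buf, UNIHEX_PREFIX, pos);
        for (i = 0; path[i] != L'\0'; i++, pos += 4) {
            if (pos + 5 > cap)
                return 0;
            sprintf(buf + pos, "%04X", (unsigned)path[i]);
        }
        buf[pos] = '\0';
        return pos;
    }

    /* The W-side of the pair: everything below CreateFileA only ever sees
     * plain ASCII, so no layer in between has to understand Unicode. */
    HANDLE WINAPI MyCreateFileW(LPCWSTR path, DWORD access, DWORD share,
                                LPSECURITY_ATTRIBUTES sa, DWORD disp,
                                DWORD flags, HANDLE tmpl)
    {
        char buf[MAX_PATH * 4 + sizeof(UNIHEX_PREFIX)];
        if (!encode_unihex(path, buf, sizeof(buf)))
            return INVALID_HANDLE_VALUE;
        return CreateFileA(buf, access, share, sa, disp, flags, tmpl);
    }

    int main(void)
    {
        char buf[256];
        /* A path with two Hebrew letters, just to show the encoding. */
        encode_unihex(L"C:\\\x05D0\x05D1.txt", buf, sizeof(buf));
        printf("%s\n", buf);
        return 0;
    }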
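
The per-program code page idea from post 9 would, assuming a true Unicode CreateFileW existed underneath, reduce the ANSI entry point to one conversion plus a call through. A sketch; the way the per-application setting is obtained (GetMyConfiguredCodePage) is invented, and only MultiByteToWideChar and CreateFileW are real APIs here.

    #include <windows.h>

    static UINT GetMyConfiguredCodePage(void)
    {
        /* Placeholder: a real version would read the per-application setting
         * (e.g. from the registry) once and cache it. */
        return CP_UTF8;
    }

    /* ANSI entry point: convert with the configured code page, then call the
     * (assumed) Unicode implementation. */
    HANDLE WINAPI MyCreateFileA(LPCSTR path, DWORD access, DWORD share,
                                LPSECURITY_ATTRIBUTES sa, DWORD disp,
                                DWORD flags, HANDLE tmpl)
    {
        WCHAR wide[MAX_PATH];
        if (!MultiByteToWideChar(GetMyConfiguredCodePage(), 0, path, -1,
                                 wide, MAX_PATH))
            return INVALID_HANDLE_VALUE;    /* name too long or bad encoding */
        return CreateFileW(wide, access, share, sa, disp, flags, tmpl);
    }

    int main(void)
    {
        HANDLE h = MyCreateFileA("test.txt", GENERIC_READ, FILE_SHARE_READ,
                                 NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                                 NULL);
        if (h != INVALID_HANDLE_VALUE)
            CloseHandle(h);
        return 0;
    }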
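
As for the stub question in post 12: "stub-faking" a non-critical void-return import just means pointing the importing module's IAT entry at a do-nothing function instead of failing the load. A self-contained sketch of that idea; the function name, the way a real IAT slot would be located, and whether KernelEx could ever do this for non-system DLLs are all assumptions.

    #include <windows.h>

    /* The stub itself: stands in for a hypothetical void-return, one-argument
     * import (a gtkmm-style "do extra work" call) and simply ignores it. */
    static void stub_void_noop(void *unused)
    {
        (void)unused;                 /* intentionally does nothing */
    }

    /* Point one import-address-table slot at the stub.  In a real shim,
     * 'slot' would come from walking the importing module's IAT for the
     * missing name; here it is just a pointer handed in by the caller. */
    static void patch_iat_slot(void **slot)
    {
        DWORD old;
        VirtualProtect(slot, sizeof(*slot), PAGE_READWRITE, &old);
        *slot = (void *)stub_void_noop;
        VirtualProtect(slot, sizeof(*slot), old, &old);
    }

    int main(void)
    {
        /* Stand-in for an IAT entry that would otherwise stay unresolved. */
        void (*missing_import)(void *) = NULL;
        patch_iat_slot((void **)&missing_import);
        missing_import(NULL);         /* now safe to call: it does nothing */
        return 0;
    }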