I wouldn't be so optimistic about the dates, but there you go:
-
Holographic RAM - "IBM researchers have suggested that they will have a small HDSS device ready as early as 2003. These early holographic data storage devices will have capacities of 125 GB and transfer rates of about 40 MB per second. Eventually, these devices could have storage capacities of 1 TB and data rates of more than 1 GB per second." http://computer.howstuffworks.com/holographic-memory.htm
-
Magnetic RAM
-"600GB per square inch (non-volatile)" http://www.theregister.co.uk/content/archive/10300.html
-"MRAM promises to combine the high speed of static RAM (SRAM), the storage capacity of DRAM and the non-volatility of Flash memory ... MRAM will slowly begin to replace DRAM starting sometime in 2003." http://computer.howstuffworks.com/mram.htm
-
Polymer RAM - "The equivalent of 400,000 CDs, or 60,000 DVDs, or 126 years of MP3 music may be stored on a polymer non-volatile memory chip the size of a credit card" http://www.thinfilm.se/html/technology.htm
Others:
-Molecular memory: http://www.theregister.co.uk/content/archive/7679.html
-2300GB on a PC Card: http://www.theregister.co.uk/content/archive/6099.html
"640KB will always be enough." -- Bill Gates.
From an interview with Google CEO Eric Schmidt:
"At Google, for example, we found it costs less money and it is more efficient to use DRAM as storage as opposed to hard disks -- which is kind of amazing. It turns out that DRAM is 200,000 times more efficient when it comes to storing seekable data. In a disk architecture, you have to wait for a disk arm to retrieve information off of a hard-disk platter. DRAM is not only cheaper, but queries are lightning fast."
From the Java2 Standard Edition 1.4 datasheet:
"64-bit support provides Java technology developers with near limitless amounts of memory for high-performance, high-scalability computing. While previous J2SE releases were limited to addressing 4 gigabytes of RAM, version 1.4 allows Java applications to access hundreds of gigabytes of RAM. This enables developers to drive more applications and very large datasets into memory, and avoids the performance overhead of reading data in from a disk or from a database."
----
More Memory Breakthroughs in the past year: http://www.google.com/search?hl=en&as_qdr=y&q=%22memory-breakthrough%22+%7C+%22RAM+breakthrough%22&as_qdr=y&btnG=Google+Search @;)
Back to: PrevalenceSkepticalFAQ.
I have 36GB of data on disk; how can I use this technology if I have to keep it all in RAM?
You cannot use prevalence if you don't have enough RAM. The point of this page is to show that, if your organization cannot afford 36GB of RAM today, it is only a question of time before it can. See PrevalentHypothesis. -- KlausWuestefeld
If you lack enough RAM to hold the object database, raw disk I/O can be used in its place. For example, for *decades*, Forth has been using raw disk I/O in the form of "blocks" to store everything from source code to database records with absolutely outstanding performance. While thrashing the internal memory cache can occur, blocks in Forth have proven to be about as fast as normal RAM in actual practice.
Do not discount the value of simplicity. Much of persistent storage's slowness comes from the layers upon layers of abstraction in most operating systems' filesystem stacks.
From what I've read of prevalence so far, Forth has been doing it, on 8-bit processors with as little as 32K of address space, pretty much since the early 70s, if you treat Forth blocks as a form of virtual, persistent memory.
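The Forth-block idea maps fairly naturally onto a memory-mapped file in Java: hot blocks live in the OS page cache and behave much like ordinary RAM, while the file provides the persistence. A minimal sketch, assuming the classic 1 KB Forth block size; the BlockStore class and file name are illustrative, not part of any library:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch of Forth-style fixed-size "blocks" backed by a memory-mapped
// file. Reads and writes go through the OS page cache, so frequently
// used blocks are served from RAM; force() flushes a block to disk.
public class BlockStore implements AutoCloseable {
    static final int BLOCK_SIZE = 1024; // Forth's classic block size

    private final RandomAccessFile file;
    private final FileChannel channel;

    public BlockStore(String path) throws Exception {
        file = new RandomAccessFile(path, "rw");
        channel = file.getChannel();
    }

    // Map block n into memory; the buffer is a window onto the file,
    // and mapping past the current end of file simply extends it.
    public MappedByteBuffer block(int n) throws Exception {
        return channel.map(FileChannel.MapMode.READ_WRITE,
                           (long) n * BLOCK_SIZE, BLOCK_SIZE);
    }

    public void close() throws Exception {
        channel.close();
        file.close();
    }

    public static void main(String[] args) throws Exception {
        try (BlockStore store = new BlockStore("blocks.dat")) {
            MappedByteBuffer b = store.block(3);
            b.put("hello".getBytes("US-ASCII")); // write into block 3
            b.force();                           // flush to disk
            byte[] back = new byte[5];
            store.block(3).get(back);            // read it back
            System.out.println(new String(back, "US-ASCII"));
        }
    }
}
```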
"blocks in Forth have proven to be about as fast as normal RAM in actual practice." Why is using virtual memory so much slower than using actual RAM, then? Are all operating-system providers plain stupid? -- KlausWuestefeld
I'm interested in using Prevayler to persist the majority of my objects by count, but the minority by size. Since my application will be archiving data result sets, over time these results may grow to more than 4GB. On the other hand, the indexes on the results for result searches, the result management, and the remainder of the application that is not result-driven can use Prevayler.
I have not yet figured out how to break it down, but 90% of the tasks should simply use Prevayler, and the final result-data lookup will have to be from a file rather than from memory because of the large result size. The persisted in-memory "mini-result" will contain a reference to the file location of the full result. Result searches, sorts, etc. should all be in memory and very fast, with only the last, least common and least expensive step going to file.
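A minimal sketch of that mini-result idea; the class and member names (MiniResult, fullResultPath, loadFullResult) are hypothetical, not Prevayler API. The summary fields stay in the prevalent object graph, and only the final lookup reads the file:

```java
import java.io.Serializable;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical "mini-result": the searchable summary lives in the
// in-memory object graph, while the bulky result data stays in a
// plain file on disk, referenced by path.
public class MiniResult implements Serializable {
    final String id;             // searchable fields, kept in memory
    final long createdAt;
    final String fullResultPath; // reference to the large file on disk

    MiniResult(String id, long createdAt, String fullResultPath) {
        this.id = id;
        this.createdAt = createdAt;
        this.fullResultPath = fullResultPath;
    }

    // Only the final, least common step touches the disk.
    byte[] loadFullResult() throws Exception {
        return Files.readAllBytes(Paths.get(fullResultPath));
    }

    public static void main(String[] args) throws Exception {
        Path big = Files.createTempFile("result", ".bin");
        Files.write(big, "huge result payload".getBytes());
        MiniResult mini = new MiniResult("run-42",
                System.currentTimeMillis(), big.toString());
        // Searches and sorts use mini.id and mini.createdAt in memory;
        // only the last step reads the file:
        System.out.println(new String(mini.loadFullResult()));
    }
}
```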
It is not always necessary to keep everything in memory, so many projects that have more data than available memory may still be able to use Prevayler for some of the things that require persistence.
I have not yet gotten into the project and used Prevayler, so I don't know if it could benefit from the notion of memory priority for some objects, where persisted objects fall into two classes: some are persisted to file and removed from memory because they are known to be accessed very infrequently, while others are always kept in memory. However, this may complicate the very simple nature of Prevayler.
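One way to sketch that two-class scheme without touching Prevayler itself is a SoftReference holder: the JVM may reclaim the softly held object under memory pressure, and it is then transparently reloaded from its file on the next access. The Evictable class below is hypothetical, a sketch assuming Java serialization for the on-disk copy:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.lang.ref.SoftReference;

// Hypothetical holder for "low memory priority" objects: the value is
// written to a file once, then kept in memory only via a SoftReference.
// If the JVM clears the reference under memory pressure, get() reloads
// the object from disk. "Always in memory" objects simply stay as
// ordinary strong references in the prevalent object graph.
public class Evictable<T extends Serializable> {
    private final File file;
    private SoftReference<T> cached;

    public Evictable(T value, File file) throws IOException {
        this.file = file;
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(value);          // persist once, up front
        }
        this.cached = new SoftReference<>(value);
    }

    @SuppressWarnings("unchecked")
    public T get() throws IOException, ClassNotFoundException {
        T value = cached.get();
        if (value == null) {                 // evicted by the GC
            try (ObjectInputStream in =
                     new ObjectInputStream(new FileInputStream(file))) {
                value = (T) in.readObject(); // reload from file
            }
            cached = new SoftReference<>(value);
        }
        return value;
    }
}
```

The appeal of this approach is that callers never see the eviction; the cost is that reload latency becomes unpredictable, which is exactly the complication the paragraph above worries about.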
Good point. How should we deal with history? -- Mihies