The 2700 is ideal for overclocking and should hit 4.2GHz like any other Zen+, while the 1800X will be about 10% slower when equally pushed. It will also be much noisier, as manual overclocking isn't nearly as efficient as boost. XFR Enhanced and Precision Boost go out the window on overclocked systems anyway.
If you want a loud speed demon, the 2700 begins to make sense. If you are willing to give up a bit less than that last 15% (a naive clock-scaling figure you won't actually see), get the 1800X. I'm a bit unsure of the difference between the 1700 and the 1800X once boost is accounted for; if you are adding an aftermarket cooler anyway, the 1700 might make more sense.
Best guess is that this is a salvaged Raven Ridge chip, while the competing Intel parts could easily be built on a dedicated 2-core mask. I don't think AMD wants to sell many of these (they can make the 2200G/2400G and various mobile chips for the same price), otherwise they would have at least provided "Vega 6", which would presumably match the G4920 (naive scaling suggests the AMD part will perform at half the speed of the Intel part).
AMD is probably feeling out the market and getting resellers used to working with them. While this is almost certainly a chopped-up Raven Ridge, don't be too surprised if they produce a similar thing in 2020 (or 2021) that doesn't require such butchery. Intel has been stalled at 14nm for 5 years (with incremental improvements, but no real scaling), and AMD may well plan on shipping 7nm parts for another 5 years (which would justify creating a separate mask for 2-core parts; or perhaps they won't see a reason to ship below 4 cores, much as most ARM phones have at least that many cores). Whatever the case, AMD has had to make do with very few distinct dies across all their SKUs, but this might change soon.
On the flip side, Intel uses as few masks as possible. I think at one point they were using the same mask for Celerons, Pentiums (back when that was the name of their main desktop processor), and Xeons. Doing one thing very well and repeating it as often as possible has made Intel very, very rich.
Expect to see cheap laptops like this sooner or later, possibly even at a lower price point (smaller 1366x768 LCDs are cheap). Unfortunately, also expect the single stick of memory. Of course with half the cores and less than half the GPU, it might get away with half the bandwidth. Performance will certainly suffer with only one memory channel, and that will be much more painful on a system like this. I'd also certainly prefer a $25 Inland Pro 120GB SSD to the rotating drive listed here (and it would also fit in a laptop more easily).
I had to check the cheapest Optane I could find, and came up with a 16GB NVMe that cost more than the 4GB RAM stick, so I doubt it is worth it (although for jobs that need 4-16GB it would work surprisingly well; just don't expect to justify the cost beyond 8GB).
There have been tales of Nvidia dumping stock on their board partners; best guess is that is what is happening. According to the infallible wiki, the 570/580 are built on 232mm2 of 14nm process while the Nvidia chip is 200mm2 of 16nm process (close enough to be the same; these numbers don't tell you all that much), so cost to manufacture might as well be equal (not so with Vega and its HBM2).
After a quick check, it appears to be a UK market issue. Here are the prices in the states (topmost selected):
1060 (3GB): $215
1060 (6GB): $278
Can't tell you when economics, politics, and marketing will come to their senses. In the US, the 580 is the obvious choice. I mentioned the used market, but used 570s and 580s are some of the most heavily used mining cards out there; it might make more sense to simply jump to a used 1080 (a less-loved mining card, as it is hungry for electricity).
This looks "too good to be true", but appears to be the only "US pricing" in the EU; I'd think you can get it before Brexit is official.
Capacity is most important; anything overflowing to the HDD is going to be slow. Gaming with long level loads might be able to tell the difference between SATA and NVMe, but level loads and booting are about the only times you could tell (which might drive people to the ADATA NVMes).
Until recently I'd have said that DRAM buffers are required (leaving them out was the mark of "too cheap" drives). The ADATA XPG SX6000 doesn't appear to have one, and the omission doesn't appear to hurt performance (although check reviews; I haven't slapped my money down on one).
There's little point to RAID0 with SSDs. SATA drives have a single high-speed link to the motherboard, cheap NVMe drives (like the ADATA mentioned above and the Inland Pro) have two PCIe lanes, and the expensive NVMe drives have four. Using two cheap NVMe drives in parallel might have some advantages over a "real" NVMe, but I'd worry about confusing the drivers (they tend to need system DRAM), and you also double the likelihood of losing all your data (on top of the risk of the "cheap drives"). I wouldn't go this route.
The point of RAID1 is to keep going in the face of a lost drive. Since human error (including "I didn't put that crypto-ransom virus on my machine") is more likely than a drive bricking itself, this really isn't a good backup method. Get an extra HDD; they are cheaper and much more reliable (if you remember to back up to it). Leave fancy drive mirroring to enterprises (of course, if your income stops when your hard drive stops, your computer is an "enterprise system"; pay what you need for reliability).
If you are at all interested in AMD's StoreMI software, it is pretty picky about "one fast" and "one slow" drive. It is also available to everyone else at $40 (small drives) or $60 (up to 1TB of "fast drive") http://www.enmotus.com/fuzedrive
I'd also look into the possibility of using 3 separate cards (presumably ~1080 or so) and giving each monitor a card. This should avoid those pesky SLI issues, although it isn't clear how this is suited for each game.
There's also motherboard compatibility. Few motherboards supply even x8 to three cards, so you would likely find one (or more likely two) cards being fed at x4 (which might not hurt as much as you'd think).
This seems to be going out of favor, and limited googling didn't tell me anything about this, so it might not be the possibility I thought it was. But don't expect simple solutions (other than using a 1080ti and accepting 40fps).
This seems a bit much. I also can't remember ever hearing of specific accounts being required to download drivers tied to your specific board. Perhaps they are tired of people flashing boards with "pro" BIOSes. It also looks like the 2080ti is the same chip as the "pro Turing", meaning that flashing the BIOS could make a huge difference in raytracing power (a several-thousand-dollar difference).
I don't recall Intel and AMD expecting fans to pre-order a month in advance and refuse to give any form of benchmarks other than "it's twice as fast. Trust us."
When you attach a TV to a computer you have to be careful about latency and framerate. Things to look for are "refresh rate", "refresh rate technology", and "game mode".
Ideally it will include reviews from people who have used it as a monitor, or at least attached a console to it. Even if you aren't gaming, you don't want the mouse to respond with a lot of lag (even wimpy cards should be able to do non-gaming things at 4k@60Hz, although your notebook's HDMI interface might be an issue). Basically anything with a 60Hz response rate and a "game mode" should be fine (although read all the fine print to make sure that "60Hz" means a true 60Hz input, not motion interpolation slapped on top of a 30Hz signal; until recently that was the common case among would-be 4k TV monitors).
Then don't worry about it. 5400rpm is probably better.
While they say that every year, I've never seen the levels of hiding the data before. Can't tell whether this is because nvidia shoved a ton of unsold chips at their resellers or because the performance isn't there.
Nvidia will get maybe 10% improvement from the half-node process (the 7nm process is the one to watch, and that should be ready soon. Expect AMD to delay and delay Navi and nvidia to wait for them to use this delay to sell 20x0s before shrinking them to 7nm 21x0s). You'll also get a boost from the GDDR6 ram. The rest will have to come from bigger (and more expensive) chips (that are already full of specific "RTX" circuitry).
The Alienware is a pretty extreme monitor, and I'd expect that an RX580 wouldn't go much further than 1080@60fps. I'd look at a used Vega 64 (even with all the mining concerns) and expect to upgrade again to Navi in a year or more.
If you are going with the RX580, I'd recommend keeping your current monitor and upgrading both when Navi drops (or possibly getting a better monitor with G-Sync and a next-gen Nvidia card).
"Because the RTX 2080 Ti will be definitely faster than the GTX 1080Ti, otherwise Nvidia wouldn't bothered to make new gen. GPU that would had the exact same performance as his the previous fastest consumer chip."
Nvidia has clearly stated that the whole point of these boards is "RTX technology". They are willing to show the thing delivering 30fps at 1080 (while doing some raytracing) but are adding never-before-seen locks to make absolutely sure that no real numbers leak out between taking customers' money and delivering said boards. The raw shading FLOPS certainly appear to line up with your assumptions, but that still doesn't justify the secrecy and control Nvidia feels is necessary to hide exactly what these cards actually do.
They forced a lot of GTX stock on their resellers, maybe they don't want to "Osborne" them to death (although I can't see many resellers dumping nvidia over anything).
A quick check (all values from pcpartpicker, not meant to be more than a basic comparison); I see the drawbacks of overclocking are already well stated.
Intel i5 8600:     base 3.1GHz, max turbo 4.3GHz, $215
Intel i5 8600K:    base 3.6GHz, max turbo 4.3GHz, $250
AMD Ryzen 5 2600:  base 3.4GHz, max boost 3.9GHz, $165
AMD Ryzen 5 2600X: base 3.6GHz, max boost 4.2GHz, $225
Looking at this list, the i5 8600 appears ideal if you don't want to overclock. You get all the single-core (or low-core) speed the 8600K is capable of producing at stock [and all the speed the AMD chips can hit at their full overclocking limit] without endangering your processor.
With the 8600K, you are paying $35 for a chip that can go slightly faster with all cores chugging away (although if you regularly use more than 4 cores at >3GHz, I'd recommend an AMD; but this is pretty uncommon). The real reason to buy it is the extra 0.5-1.0GHz from overclocking (depending on how extreme you want to go).
On the Red side, the 2600X is also a good choice for not overclocking, as that 4.2GHz boost clock is about as good as you can get an AMD to run. So it nearly matches the 8600 in low-core situations (the common case) and can provide 6/12 cores/threads at 3.6GHz vs. 6/6 cores/threads at 3.1GHz in heavy-core situations. The 2600 is ideal for overclockers: once overclocked it should match the 8600 in flat-out single-core work, although it really can't match an overclocked 8600K (when using all the cores, the AMD's extra 6 threads might help, but that doesn't come up in the common case).
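The "heavy core" comparison above is just cores times all-core clock. A minimal sketch of that estimate, with the caveat that it ignores IPC differences and SMT gains entirely (both real factors in favor of one chip or the other):

```python
# Naive all-core throughput estimate: cores * all-core clock (GHz).
# Zeroth-order only: ignores IPC differences and SMT scaling.
def naive_throughput(cores, all_core_ghz):
    return cores * all_core_ghz

ryzen_2600x = naive_throughput(6, 3.6)   # 21.6 "core-GHz"
i5_8600     = naive_throughput(6, 3.1)   # 18.6 "core-GHz"
print(f"2600X heavy-core advantage: {ryzen_2600x / i5_8600 - 1:.0%}")
```

By this crude measure the 2600X comes out roughly 16% ahead before counting its extra 6 threads, which is why the heavy-core case tilts AMD.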
I wouldn't be at all surprised if Nvidia releases a 21x0 series (on 7nm) before AMD's next generation. TSMC is presumably cranking out final 7nm silicon as fast as it can for Apple's next iPhone (soon to be released and sold to those waiting in line).
I'm a bit surprised that Nvidia decided to launch Turing right as 14nm becomes a "trailing edge" technology. They must really want to get raytracing out as soon as possible, because if they wanted fast cards they could have simply waited a month or two (and why didn't they ship consumer Volta to miners when they had the chance?).
If you are running an RX480 and are looking only to AMD, do you have a 1080p monitor? Are you running 4K at 1080, or possibly a 1440 panel at either resolution? I'd expect Navi to handle 4K with FreeSync, but I'm not sure how available such displays are (hopefully cheap FreeSync 4K TVs will be around by then).
The floor should fall out of the used 1080/1080ti market in about a month, making 4K a real possibility. If you are holding out for a Vega, I'd assume Vega prices should fall as well, but post-9/30 Nvidia cards are less likely to have been abused by mining (they use too much power, and I'd assume most of that wave were gaming cards). The Vegas may well be miners finally admitting that their cards aren't worth all that much.
It all comes down to how long you can wait (and what resolution/framerate you want). Waiting for AMD always seems to be heartbreakingly delayed, and I'm sure that Nvidia will want to time their next generation to compete on an even basis with AMD. Reading carefully into Gilroar's link, we notice that while AMD still claims it will release Vega 20 [the 7nm Vega] in 2018, they have only taped out and don't have samples. Time is running out, guys. They have also taped out Epyc [on 7nm Zen 2; current Ryzens use Zen+] and plan on releasing it in 2019, but say nothing about a 7nm Ryzen (which I thought was supposed to be on GF, which might take some time to fix). I suspect AMD's priorities are:
Epyc (zen 2 server)
Ryzen (zen 2 desktop)
PS5 (who is paying for Navi, contracts might even put this higher than Ryzen)
That said, I don't expect anyone to come up with a better process for manufacturing chips than TSMC 7nm for at least 5 years, so it will be hard to go wrong once AMD and nVidia put their (7nm) boards on the table. If the Sony money manages to provide enough R&D for AMD to finally catch up to nVidia, Navi should be the heart of some great cards. Just don't hold your breath.
I have to believe that Nvidia is shooting themselves in the foot. Also, why would premature drivers even be out there unless they came up with this crazy system only recently? Are the leaks using GTX1080 drivers or something (possibly Titan V)?
Apparently nvidia is convinced the faithful will ignore leaks of unapproved sources of information and only accept those that swore to only tell nvidia's side of the story in return for samples and official drivers.
Do you have an SSD for booting and primary gaming? If so there are two speeds: SSD (fast) and HDD (slow), and the difference between 7200 and 5400 won't be noticed. If you don't have an SSD things get hairy and 7200 would help, but not as much as having any SSD.
In general, 5400rpm just means less noise, less power, less heat. Such drives tend to be better for all modern HDD tasks (bulk storage, backup) even if they transfer data a bit more slowly (and if you are streaming media from them this is a non-issue).
If all 8 are the same, I'd look into this: https://www.newegg.com/Product/Product.aspx?Item=9SIA4A05N59112 It should be able to handle RAID5 or 6 (newegg's "50/60" refers to RAID50/60, which are striped sets of RAID5/6 arrays).
Difficulty: you almost need an SLI motherboard to have a free x8 PCIe slot.
Note: You could almost buy a 1TB SSD for the price of that card. A four port card will set you back the same as 256GB of SSD, don't buy one for obsolete SSDs.
How is windows installed on the old system? If you bought the system pre-built or bought an OEM edition of windows your windows license is tied to the motherboard and you will need to keep that windows with the motherboard.
If you have the install discs, just install away.
If it was pre-built, I'd look for a "recovery partition" and copy that partition from the old drive to the new drive, then try to reinstall windows (it might not like the size of the new drive; you might have to play a few games to convince it to work, especially regarding the MBR [master boot record]). If this doesn't work (and the thing has a recovery partition), it might be best to let windows keep the old drive: just copy everything over to the new drive, (preferably) wipe the old drive except the recovery partition, and do the recovery method.
[some cloning programs, especially needed if you need to move the recovery partition]
On September 30, plenty of people who simply have to own the latest and greatest will suddenly have a bunch of extra 1080tis. Note that the 1080 and 1080ti weren't exactly the best mining cards around (they like to eat electricity), so there is a certain premium, and I suspect that the chips Nvidia is dumping on partners aren't 1080tis (the issue is more likely 1030s that can't compete with AMD 2200G APUs [especially when tied to Intel processors], plus 1070s and 1060s that can no longer be sold to miners).
If your HDD is larger than your SSD (which I'd expect, but you never know with pre-built stuff), you will have to shrink your filesystem. Hopefully your cloning software will do it, but it should be an option in windows as well (you may have to defrag first to move everything away from the end of the partition).
I'd look here: https://www.macrium.com/reflectfree for copying things over; hopefully it can deal with different sizes.
I normally favor https://clonezilla.org/ (it is completely free and includes things like differential backups that Reflect makes you pay for), but dealing with windows filesystems isn't a high priority for it (and thus it tends to only work when copying to equal or larger media).
So still called "navi" then, ok. I was wondering how much time would have been lost.
Also, expect a "fabric" design from Intel. Not just because of Raja, but also because it is an ideal use for EMIB.
Make sure whatever you get can input and output at 60Hz with minimal lag (look for things with "gaming mode" and such, or reviews of people happy with the thing as a monitor). Even if you don't game, some of the older TVs made lousy monitors thanks to mouse lag.
Any 4k monitor had better do 1080 well; that's 95%+ of the content available (well, 1080 and under). Doing 1440 might be an issue, but 1080 simply uses 4 pixels for every source pixel: it is an exact integer fit.
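The "exact fit" is easy to check: 3840x2160 divides evenly by 1920x1080 (2x in each dimension, so every 1080p pixel becomes a clean 2x2 block), while 1440p does not. A quick sketch:

```python
# Integer-scaling check: does the panel resolution divide evenly
# by the source resolution in both dimensions?
def integer_fit(panel, source):
    panel_w, panel_h = panel
    src_w, src_h = source
    return panel_w % src_w == 0 and panel_h % src_h == 0

print(integer_fit((3840, 2160), (1920, 1080)))  # True: clean 2x2 blocks
print(integer_fit((3840, 2160), (2560, 1440)))  # False: 1.5x, needs filtering
```

The non-integer 1.5x ratio for 1440p is exactly why it can look soft on a 4k panel.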
TCL is the up and coming brand name in this field, and the last year or so of TVs seem to make good monitors (assuming you can deal with 43" on your desk). Vizio is a more established name, but I haven't been following them as closely. For more expensive TVs I'd suspect you can get better specs from a "monitor", but that's outside of my price range anyway.
Looks like the 2080ti is the "Turing compute" chip used for high-end raytracing. If you are buying the $6000 card, you should look for motherboards capable of tying multiple cards together (this was even more important with Volta). If you are buying the $1200 card, forget about it.
AMD never intended to release a gaming chip before 2019 as they certainly want to use TSMC's 7nm process. I think the first chips to use it will be the next iPhone chip (to be demoed in 2 weeks). AMD also intends to make the 7nm Vega on TSMC this year, although it isn't clear why (when AMD was the only customer for GF 7nm it made all kinds of sense to run that first before Zen).
It is quite likely that Vega was sacrificed to Zen and Navi, although it isn't clear if Navi will benefit from that (depending on how much was thrown out). AMD has had more money to spend on Navi than any GPU for a long time, so we shall see.
The other side of the coin is that Nvidia obviously can use 7nm as well, and can afford to shrink Turing down far more than AMD can afford to shrink Vega. So don't be surprised if 21xx cards show up in 2019, certainly ready to compete with AMD in 2020 if need be. I'd be surprised if the 21xx doesn't become the "real" RTX card for the immediate future (although given the revolutionary nature of the thing, it will probably need another 7nm refresh a year or two down the road).
One of the reasons I expect 7nm to be a big deal is that I don't expect 'the next big thing' (presumably 3nm) to be ready for 5-10 years. In Zen's case 7nm is a chance to level the playing field with Intel, but for GPUs it may well be a good place to "buy and hold" a GPU (and get real "future proofing").
The RPMs tell you much more. But if you care about speed at all, you should just pick up one of the SSDs that Allan_M_Systems mentioned (or perhaps an Inland Pro from a local Microcenter; they are a bit better known).
If it is a media center, just shove the largest drives in (any drive can feed faster than any media needs to stream) and possibly add a SSD boot drive if performance needs a boost.
In the back of my mind, I think "square the cache, twice the performance" was the rule at one point, but for caches in general and not hard drive buffers (8MB seems the standard, 16MB was probably simply to fit the available chips). Just assume performance scales linearly with RPM and you'll get a better picture. And SSD is wildly faster for anything more recent than an Athlon64.
This looks impressive. I generally can't argue for paying a premium for NVMe, but this goes at a price similar to an MX500 SATA drive. Not sure if you have a free NVMe slot, but you didn't put much into your question.
I'd go with the ADATA SX6000. It is hard to notice the difference in speed between a NVMe and SATA, and I'd expect the speed of the SX6000 to be at least perceptible during booting and level loads (seeing the difference between the SX6000 and the 960 evo will be much harder).
Having the SX6000 the same price as the Crucial SATA (generally a good price for an SSD) makes it a no brainer.
WARNING: the SX6000 doesn't include DRAM. This makes a lot of sense in a windows system when 10 cents of system ram can be used instead of adding $10 to the price of the NVMe. If you use Linux and have a lot of weird partitions, this might not be a good deal.
I couldn't confirm this at all (one hit claiming "Navi was bad" and "would come out in 2020", long before repeated official claims of the standard roadmap). Granted, this pushes AMD's next architecture (and gaming chips) out to late 2019 or 2020, but that is a given based on Sony paying for "Navi".
Given that AMD is developing their next GPU architecture for Sony, whatever they design had better be ready for 2020. Whether old "navi" or some completely different single chip "navi", the timeline won't change.
It would be unfortunate if AMD didn't get 4+ years of steady development to make a really great chip, but there are similar tales of Sony botching the PS3 architecture (expecting Cell to do everything, then running a crash project to add an Nvidia GPU to the thing). I'd expect some sort of updated gaming GPU on 7nm by 2020 (they say 2019, but AMD is always late).
I'd also suspect that depending on how late nvidia thinks AMD will be, they can delay their 21xx series a bit more. They have to choose between killing 20xx too fast, and letting AMD take the lead. I'd assume that an early 21xx might be ideal, although August launches seem to make more sense (back to school and holiday times).
The only available data is here: https://www.backblaze.com/b2/hard-drive-test-data.html
The catch is that they kill drives through heavy use and somewhat elevated temperatures while consumers kill drives by turning them off and on. I'd take the backblaze data more seriously if building a NAS.
That said, I'd avoid the Barracuda (which uses the x000DM line that backblaze had trouble with) and go with some of the 5400rpm Constellation drives instead (sure, they are slightly slower, but they are also less noisy, and anything where you care about speed should be on SSD anyway). Oddly enough, Backblaze had lousy results with their Western Digital drives, especially their 2TB ones.
I'd also recommend looking at the Seagate Constellation ES.2 3TB, which gives you an extra TB for no more than $10 more (less if you were looking at Barracudas).
It would be best to break your storage needs down into two piles:
SSD: generally any "name brand" (i.e. one with a DRAM cache). I'd buy capacity first, speed second (it doesn't look like you are running anything where you'd really be able to tell the difference between SATA and NVMe). I'd also consider an ADATA XPG SX6000 (a basic 2-lane PCIe job; the fast stuff uses 4 lanes) to get most of the boot-up/level-load benefits at nearly SATA cost. Difficulty: you can't buy it in 1TB.
HDD: Looks like 4TB isn't enough. One obvious choice is to buy a second 4TB and RAID0 them together. Also don't forget to buy enough HDDs to back up all your storage: you certainly have the budget for it. Note that the AMD SSD caching software doesn't like working with more than one drive, and I suspect the various 3rd-party programs that do the same thing aren't heavily tested with weird configurations (with that budget I'd be strongly tempted by 2 512GB SX6000s and 3 4TB drives in RAID5, with a similar RAID array for backup). My guess is that your budget is way too high for what you are trying to do, but I'd recommend the ADATA NVMe and 3 4TB HDDs (your choice). Note that buying the backups means you can create the RAID with two new drives and then copy all the files from the existing drive: far fewer complications.
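For the array sizing above, the usable-capacity rules are simple: RAID0 gives you everything with no redundancy, RAID5 gives up one drive's worth to parity. A quick sketch (raw TB, before formatting overhead):

```python
# Usable capacity for the arrays discussed above (raw TB).
def raid0_capacity(drives, size_tb):
    return drives * size_tb          # striping: all capacity, no redundancy

def raid5_capacity(drives, size_tb):
    return (drives - 1) * size_tb    # one drive's worth goes to parity

print(raid0_capacity(2, 4))  # 8 TB, but one dead drive loses everything
print(raid5_capacity(3, 4))  # 8 TB, and it survives a single drive failure
```

So three 4TB drives in RAID5 match two 4TB drives in RAID0 for space while actually tolerating a failure, which is the whole appeal.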
If you really want to reuse your DDR3, I'd recommend a i5 2400 ($90, 3.1GHz, shouldn't need watercooling).
If you want to go crazy with more cores, try a Xeon E5-2670 8 cores 16 thread 2.6GHz - runs $140, which might cut into your cooler budget (it is only 115W, so doesn't need as much as the 9590 and presumably air cooled). I also can't begin to price the motherboard and tell you if your DRAM will work (it probably wants ECC and/or registered).
If you want a modern CPU, go with an AMD ryzen or possibly Pentium/Celeron/i3 (assuming you can clock a few cores high enough) and expect to buy the DDR4. Allan_M_Systems has the numbers for the modern options.
No, only an earlier (mythical?) 9590 included an AIO water-based heatsink. And then you would want to scavenge the heatsink for something worthy of the device, and underclock the 9590 to deal with whatever heatsink you used instead.
The thing merely existed to touch the 5.0GHz "barrier", if only on a single core in ideal situations. It never made sense as a processor.
Generally speaking, the Bulldozer only made sense when you needed 8 dirt-cheap threads and/or you didn't care about processor speed past a tiny minimum (which is true for games more than you'd think, and was certainly even more true in Bulldozer's day). You'd still typically be better off with an i5 or i3, and nearly always better off with an i5 [you could probably have afforded an unlocked one when the FX-9590 was originally released] than with the FX-9590.
If you got confirmation on the order, you got them before they sold out (or perhaps you will wait in line for awhile if you bought from newegg or amazon, who knows). And it looks like it was just the 2080ti.
Maybe they will ship more to their partners than their customers, but you can't order a 2080ti from nvidia.
That certainly isn't a good sign. I'd suggest running more than one diagnostic. It might be using a microsoft file-estimation routine, but if a diagnostic returns any bad sectors the drive is dying (bad sectors used to be common, but eventually drives started remapping them in hardware after less savvy consumers took offense; seeing bad sectors now implies the drive has more than it can handle internally).
3 sweeps is overkill on modern hard drives; they run on the edge of data loss all the time (fortunately with enough error correction to avoid actually losing data), and it shouldn't be possible to recover data that has been overwritten once (preferably with pseudo-random data instead of zeros).
Mirroring just means that if you delete the data on one drive, either by the direct carelessness of deleting it or by the indirect carelessness of installing malware, you lose it on both. Stupid user tricks are some of the biggest dangers to data. Follow Root_user's advice above: buy at least two drives and keep at least one of them separate from the machine.
Note: you can try "enterprise drives", but in practice they are likely worse, as they are intended to be powered and running 24/7; you probably want at least one of your backup drives in a desk drawer most of the time. And if you managed to find an "ultra-high-reliability drive" and opened it up, you would simply find multiple drives inside: that's how engineers build reliability into such things. It is way cheaper to just buy 2 [consumer] HDDs yourself, although if your NVMe needs add up to a reasonable SSD size (typically less than a TB) you could use SATA SSDs if you really wanted to.
As far as I know, the first batch is sold out. You won't get one on September 20, and who knows when the next batch will come.
Pretty bad, considering that the RTX2070 is supposed to cost $500 ($600 with the founders tax) and has rendering specs nowhere near the $450 (available now, at that price) GTX1080. You need raytracing to possibly justify buying the card.
                   RTX 2070        GTX 1080
CUDA Cores         2304            2560
Core Clock         1410MHz         1607MHz
Boost Clock        1620MHz         1733MHz
Memory Clock       14Gbps GDDR6    10Gbps GDDR5X
Memory Bus Width   256-bit         256-bit
VRAM               8GB             8GB
Single Precision   7.5 TFLOPs      8.9 TFLOPs
data from https://www.anandtech.com/show/13249/nvidia-announces-geforce-rtx-20-series-rtx-2080-ti-2080-2070
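The single-precision numbers in that table follow directly from cores times boost clock times 2 FLOPs per core per cycle (one fused multiply-add):

```python
# Single-precision FLOPS = CUDA cores * boost clock * 2 (one FMA per cycle).
def sp_tflops(cuda_cores, boost_mhz):
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

print(f"RTX 2070: {sp_tflops(2304, 1620):.1f} TFLOPs")  # ~7.5
print(f"GTX 1080: {sp_tflops(2560, 1733):.1f} TFLOPs")  # ~8.9
```

Which is why, on raw shading alone, the cheaper 1080 still comes out ahead.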
The 780 to 980 transition brought about a 20% increase, and that was also a "same process" architecture-improvement card (they also added more memory pins). These cards may provide that, but they will also expect you to pay for both that and the raytracing circuitry, which will be used who knows when.
The marketing they splattered around yesterday indicates that 1/3 of the chip is spent on raytracing specific hardware. Expect to pay a lot to exceed the last gen.
Note that any claims of "980 to 1080" gains are wildly overblown: that was a full-node change (28nm to 14nm). This is a 14nm to 12nm change, exactly the same as we saw from Ryzen to Ryzen 2 (about 10% improvement, which really helped close the gap with Intel; but I'm not sure many people were excited about jumping from Ryzen 1 to Ryzen 2).
AMD is planning 7nm chips, which should get a boost similar to the 28nm-to-14nm jump. I can only assume that Nvidia will be willing to shrink these down as well, and I'm really wondering how long this generation can last. WARNING: waiting for AMD can take a very long time, and the first 7nm card (an overpriced pro part: Vega, made by GloFo) will have no relation to the consumer 7nm cards (Navi, made by TSMC). This might let Nvidia sit on the next generation and try to make some money on this one.
One final note: if these cards provide any real value in ordinary graphics shading, Nvidia's investors will never let them hear the end of not releasing them during the mining boom. Volta was two years ago, and TSMC could have manufactured these cards (perhaps with GDDR5 instead of GDDR6) anytime within the last two years, but somehow Nvidia missed the mining craze with them. If raytracing drives the price up, though, it is a moot point (and raytracing won't work any better on current games).
Sounds good to me. As far as I can tell, Pro Tools 12 doesn't appear to do any CUDA work (using the GPU for math acceleration). This is pretty annoying, as even a GT1030 should be able to do more fp32 flops (even when combined with a Celeron or similar) than the mighty i7.
https://www.reddit.com/r/protools/comments/5y0ood/gpu_acceleration/ [only real hit I got for "cuda pro tools 12"]
The only reasons you need huge Quadro cards are for CAD drafting work or numerical calculations that you want ECC (error correction on the memory). Since all the action happens on the CPU, there's no way to bottleneck on the GPU.
That would work. Maybe a 660->760 jump would be an even better example, as that went from the first 28nm architecture to the second. All I remember is that the 6xx, 7xx, 8xx, and 9xx were all on 28nm (some of the earlier low-end boards used older processes).
Developing on two 1280x1024 monitors? Absurd!
I am having trouble imagining anywhere that would have space constraints such that a single much larger monitor wouldn't be better, and couldn't have been purchased along with the GPUs. Is it mostly used remotely and you simply had a second 1280x1024 slapped on for slightly easier maintenance duty?
960 to 1060 was a leap from a 28nm node to a 14nm node (a full node leap). This was the type of leap we got used to every two years while Moore's law held. This would be a 14nm to (at best) 12nm leap, and don't expect all that much from a "half node". Look to 860 to 960 for a better example (28nm to 28nm).
AMD is scheduled to ship on 7nm next year (knowing AMD, late next year if that). TSMC claims this node will be great (competitive with Intel's 10nm, the one Intel is still working on), and I'm shocked silly that nvidia is launching on the same process it shipped Volta on two years ago when a new process is imminent (presumably reserved for Apple first, but nvidia should be able to queue up right behind Apple).
Nvidia hasn't said anything yet and won't until 12:00 EDT, Monday, August 20. But they've promised the Sun, the Moon, and the Stars with every card since 1996 (usually delivering, though a couple of launches were rebrandings of old cards; I wouldn't be remotely surprised if the GTX 2060 is a rebranded 1080 with GDDR6 [assuming it hits those numbers]).
If the CPU is the limit in games, I'd rather go with a 2xxx-series AMD, even if it has fewer cores. Of course, if the issue only shows up with gaming+streaming and never plain gaming, then a 1xxx part might be better if you need the 8 cores.
One thing with AMD: if in two more years you want more CPU, third-generation Ryzen should be compatible with the same motherboard.
Who cares about the PCB? All the action happens on the GPU chip. The PNY leak I saw had it ~25% faster (CUDA cores * clock speed, a zeroth-order approximation) while taking up 60% more area (using the rumored 754mm**2 vs. the known 471mm**2 for the 1080 Ti), which would make the chip more than 60% more expensive to fabricate. If it doesn't take up 60% more area, don't ask me how they fit the raytracing.
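The "more than 60% more expensive" part follows from yield: bigger dies are more likely to catch a defect, so cost per good die grows faster than area. A minimal sketch, using a simple Poisson yield model with an illustrative (assumed, not sourced) defect density:

```python
import math

# Why a 60% bigger die costs more than 60% more to make: cost per *good* die
# is wafer cost / yield, and yield falls exponentially with area under a
# simple Poisson defect model. Defect density here is an illustrative guess.
def die_cost(area_mm2, defects_per_mm2=0.001, wafer_cost_per_mm2=1.0):
    yield_frac = math.exp(-defects_per_mm2 * area_mm2)  # fraction of good dies
    return wafer_cost_per_mm2 * area_mm2 / yield_frac

old, new = die_cost(471), die_cost(754)   # 1080 Ti vs. the rumored big die
print(f"area up {754 / 471 - 1:.0%}, cost per good die up {new / old - 1:.0%}")
```

With these toy numbers the 60% area increase turns into roughly double the per-die cost; the exact figure depends entirely on the assumed defect density, but the superlinear trend does not.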
Don't expect it to be cheap, especially if nVidia has to clear out a lot of old inventory (note that those with good connections seem to take that as gospel, while I'm only really aware of low-end non-mining cards being returned: the ones that can't compete with AMD's 2400G/2200G [of course they can blame Intel just as much]).
At this point, all I can do is wait till Monday (considering the news is supposed to break in Germany, it should be posted when I wake up).
They are using the same process as Volta (a tiny improvement over the Pascal process behind the GTX 1080 Ti, with nearly identical transistor performance and size) and filling it with lots of raytracing hardware. Something has to give, and it is probably standard rastering performance (or possibly price, but don't expect 25% more chip for the same price).
If you want to make current games (and next year's, and probably the year after's) any faster, wait for the [rumoured] GTX 2080 Ti. Of course, that might just be the RTX with the raytracing disabled, but who knows. When somebody ships a game with an engine built around raytracing, then you obviously want an RTX board.
Also note that the leak is for a GTX board. It isn't covered with lots of raytracing circuitry. Although I don't really expect it to be cheap, assuming it performs as shown (investors will be screaming at them for not selling during the mining boom).
Expecting to get both higher performance rastering and lots of raytracing circuitry on the same board is silly. Especially when the process (i.e. the transistors) hasn't significantly changed.
HINT: the process should change significantly next year, but it is possible nvidia believes otherwise and thus dropped this on 12nm. Or possibly all previous generations were "close enough" to their predecessors that nvidia was happy launching a new architecture and a new process together. Turing's raytracing may be sufficiently new that they wanted to release it on a process they already had experience with from Volta.
PS: wccftech will tell you what you want to hear. Check back and they'll tell you it was supposed to launch in 2017 as well...
So the question comes down to how to deal with the data moving to a new machine.
The easy answer: copy anything you want to save on the 128GB SSD onto the 4TB drive. Remove 4TB drive and place in new improved[?] desktop. Done.
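The easy answer can be sketched in a few lines. This is a hypothetical helper, not any particular tool: the folder names and drive letters are examples you would swap for your own, and it assumes Python 3.8+ for `dirs_exist_ok`.

```python
# Minimal sketch of the "easy answer": copy the folders worth keeping off the
# small boot SSD onto the 4TB data drive before moving that drive over.
# All paths and folder names below are examples, not real defaults.
import shutil
from pathlib import Path

def rescue(folders, src_root, dest_root):
    """Copy each named folder under src_root into dest_root/rescued/."""
    for name in folders:
        src = Path(src_root) / name
        dest = Path(dest_root) / "rescued" / name
        if src.exists():                                  # skip anything missing
            shutil.copytree(src, dest, dirs_exist_ok=True)

# e.g. rescue(["Documents", "Pictures"], "C:/Users/me", "D:/")
```

After verifying the copies, the 4TB drive goes into the new desktop and nothing else needs to survive the move.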
How to make this hard:
You want to expand your hoard of guitar video and music. The easy (and slightly expensive) way is to simply buy a huge HDD (Toshiba sells an 8TB one for $200). You could buy another 4TB drive more cheaply, but combining the two into a RAID0 wouldn't be safe (lose either drive and you lose everything), so a lot depends on how much you hate moving things around. Also don't expect any drive-fusion software to work with more than one large HDD.
You are tired of moving your music/videos over to the HDD. The obvious solution is the Enmotus software already mentioned. The big difficulty would be doing so without erasing either the old music/video drive or the new boot drive; this is a problem with pre-builts. This solution also only works with one "fast drive" and one "slow drive", so you would likely be stuck with the original drives.
Finally, I'm wondering if these "prebuilt desktops" have 1TB SSDs in them. That's quite a bit, but certainly possible. A 1TB HDD is a bad sign, as the second terabyte typically costs ~$10 or less (and you can use it, even if most people don't). If it has a 1TB HDD, I'd really question whether the parts beat an 8-year-old desktop, especially if you have a Sandy Bridge-based CPU (those came out in 2011, so it's hard to tell).
There really isn't any benefit to putting your music and video on an SSD. You simply start at the beginning (or a point of your choice) and keep reading sequentially from there, at a speed any HDD can keep up with. Windows, games, and other apps instead tend to read little bits scattered all over the drive; that's when you want an SSD.
A RAID0 of 4 or so SSDs only means that when one of them fails, you need to pull your backup off your 4TB HDD (you kept it current, right?). It won't speed up how you access this data at all. Eventually it will make more sense to simply move all the data over to SSD (expect 4k video to expand the collection to 16TB before that happens), but we won't be there for some time.
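The risk math behind that warning is simple: a stripe set is lost if any one member fails. A quick sketch, assuming independent failures and an illustrative 2%/year per-drive failure rate (an assumption, not a spec):

```python
# RAID0 survives only if EVERY member drive survives, so the loss probability
# climbs with each drive added. Failure rate below is an illustrative guess.
def p_array_loss(p_single, n_drives):
    """P(at least one of n drives fails) = 1 - P(all n survive)."""
    return 1 - (1 - p_single) ** n_drives

for n in (1, 2, 4):
    print(f"{n} drive(s): {p_array_loss(0.02, n):.1%}/year chance of losing the array")
```

At 2% per drive, a 4-drive stripe is losing data almost 8% of years; nearly four times the single-drive risk, and exactly why the HDD backup has to stay current.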
Obviously doesn't apply if they ship a Turing-based GTX2080 built around rasterization. Still expecting something like that next year, but very confused by nvidia.
I'm really wondering if anybody ever used the T&L features on their original GeForce, or whether they had already replaced it with a newer GeForce by the time games shipped using the feature. I remember it going unused for a long time. Boards have much longer lives nowadays, but games take even longer to develop, and I expect further improvements to raytracing engines.