Real-Time ray tracing games to ship 2022... and be delayed to 2024
Not to say we don't need to get there. Expect AAAA-level visuals done at Unity-level costs. So instead of yet another indistinguishable sequel from EA or Activision Blizzard, we get indie stuff with awesome graphics. EA will continue to exist as long as people keep chasing the experience they had with the original into sequel 28, done by a third different studio, but there will be new studios and new games for them to gobble up.
What are you trying to do with it?
In general, anything remotely modern should have an SSD, although jumping through the hoops needed to copy an HDD image to an SSD is not for the fainthearted. If you're stuck with a 500GB HDD (all too common with your class of CPU), a 500GB drive will fit your budget. You might need to shrink the HDD partition ever so slightly if you have a 512GB->500GB combination; if it's close, just make sure one will fit the other.
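As a rough sanity check before cloning (a sketch with illustrative numbers, not tied to any particular cloning tool): the data actually used on the source, plus a little slack for filesystem overhead, has to fit within the target's raw capacity.

```python
def fits_on_target(used_bytes: int, target_bytes: int, slack: float = 0.02) -> bool:
    """True if the data used on the source drive fits on the target,
    leaving a little slack for filesystem overhead."""
    return used_bytes <= target_bytes * (1 - slack)

# A 512GB HDD that is 80% full vs a "500GB" SSD (drive makers use
# decimal units, so that's 500 * 10**9 bytes):
print(fits_on_target(int(512e9 * 0.80), 500 * 10**9))  # True - fits after shrinking
print(fits_on_target(505 * 10**9, 500 * 10**9))        # False - too full to clone
```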
I don't think replacing the CPU and keeping the motherboard (and RAM) is realistic (the RAM being the important part). From googling, the chip appears to be FM2+ (the DDR3 is the giveaway), and most CPUs for that socket will be equally obsolete (although this might let you pick up a bargain on eBay; just don't pay much for such a CPU).
Do you have 4GB (or less) of RAM [common with those CPUs]? That might be more important than an SSD (I know my dad's seriously obsolete notebook didn't improve with an SSD, but that wasn't nearly as bad as your AMD A10).
Looks like $2,300 for the lowest-priced one (60% of the big boys), $6,300 for the "all the raytracing" one (with twice the memory of the entry level), and $10k for "all the raytracing and twice the memory" of the middle one. I guess that memory does enough market segmentation on its own.
Still, that $2300 will certainly buy you more raytracing than the Titan V (more than double). Just don't expect to match it in double precision floating point (i.e. scientific calculations). No idea how good it will be for machine learning (I can't help but think it will get slashed by 60% as well, can't have it competing with the top end).
Specifically, next information drop is next Monday (assuming you don't want to spend at least $3,000 on a Quadro). I wouldn't count on cards being available Monday (pricing would have certainly leaked from someone). I just wouldn't make a decision before then.
If "getting Microsoft onboard" includes "making the GPU for the console" then it isn't a problem (although who would make the CPU for nvidia is another problem). As far as I know, Sony paid for most of AMD's Navi design, so I wouldn't be surprised if Microsoft hired nvidia. Nvidia's cost and hostility with other CPU makers is still a problem, although they might buy this contract to push raytracing.
I doubt game engine designers will be enthusiastic about using raytracing on Microsoft consoles if they don't have hardware raytracing available. Especially when nvidia's favorite calls are known to have emulation that falls flat on its face on any other product (see any "the way it was meant to be played", ever).
Good chance it will be at least $3,000 if they simply help themselves to the Turing chip. Even worse, I'm not sure they can cripple ECC and still expect the Turing market to buy full Turing instead of the RTX2080, so possibly they will have to cut down the 2080 (except that it will be just as expensive to manufacture, so maybe just a premium on 1080ti prices).
Do graphics houses really care about ECC? I'd expect them to simply run a low-pass filter over the result, then redo anything that isn't within a few percent of the result. As a bonus, you can sneak cheap anti-aliasing along with this trick as well. If the graphics houses don't want ECC, they will have to come up with some other way to market segment Turing proper from RTX2080, and I suspect that will cripple 2080's raytracing abilities (which is what they are selling with Turing).
So expect a Titan V with somewhat better raytracing at a Titan V price (maybe a little better thanks to GDDR6 vs. HBM2). Note that raytracing price/performance can't scale better than the RTX 8000 Quadro: nvidia knows that if you can get raytracing as good from 4 RTX 2080s as from 2 RTX 8000 Quadros, the RTX 2080 will have to cost more than half a Quadro or they won't sell any Quadros ("the more you buy, the more you save" has to be true in nvidia-land).
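That segmentation arithmetic is easy to sketch (the $10k Quadro price comes from the figures above; the 4-to-2 matching ratio is the hypothetical from this paragraph):

```python
def min_consumer_price(quadro_price: float, quadros_matched: int,
                       consumer_cards_needed: int) -> float:
    """Lowest price per consumer card that keeps N consumer cards from
    undercutting the Quadros they collectively match in raytracing."""
    return quadro_price * quadros_matched / consumer_cards_needed

# If 4 RTX 2080s matched 2 top-end Quadros, each 2080 would have to
# cost at least half a Quadro:
print(min_consumer_price(10_000, 2, 4))  # 5000.0
```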
I'm still trying to figure out why this is on 12nm (or close enough) and why they didn't wait for 7nm. I'm still expecting the next generation GTXxx80 on 7nm, but am greatly confused (maybe they know something about TSMC's 7nm that absolutely nobody is talking about).
The only time AMD ever pulled that off is when Intel abdicated leadership of x86 and allowed AMD to dictate the 64-bit extensions. Pretty much since then, few have followed where they led (maybe Mantle/Vulkan/DX12?).
Nvidia can at least lead in the PC games that ignore casuals (Intel graphics and/or whatever old nvidia graphics were included in the notebook) and consoles (AMD graphics). Not sure that such a thing is going to be a seriously big market, although raytracing is supposed to not require the extreme special-case code that AAA+ games rely on for their graphics.
Expect a long, slow replacement of rasterizing cards with raytracing cards (more than enough time for AMD to jump on the bandwagon). T&L took a good long time to catch on after the GeForce originally launched, and then I'm not sure when you actually needed shaders (Doom 3 was hardly a "must have" game, unlike Id's previous three).
I wonder what the market is for Pixar, ILM, and Weta. It might be a lot of money, but I doubt it is enough to justify something like this. Maybe they figured they could break even selling to them (and smaller places using RenderMan and similar) and have a raytracing engine "for free" that could propel them into the next generation.
Confused, to put it mildly.
So this new chip has similar memory bandwidth to Volta, fewer transistors, and less die area (but not much less; it looks like the process is the same), but plenty more features (mostly in a raytrace-only engine).
According to the chart here: https://www.anandtech.com/show/13214/nvidia-reveals-next-gen-turing-gpu-architecture (the bottom one):
RTX8000 (Turing) has 40.5 million transistors per square mm.
GV100 (Volta) has 38.6 million transistors per square mm.
This is 12nm, maybe 10nm, but not the 7nm state of the art process that TSMC is just starting up.
So how did they make Turing better? Anandtech shows that all the "impressive flops numbers" appear to come from dividing the words into smaller and smaller slices (they loved to show off their 4-bit operations, which might work for some machine learning operations but is still pretty limited). They make a lot of claims compared to Pascal, but Volta is their latest and greatest competitor. It does appear, though, that they will deliver 4 times Volta's raytracing power, in raytracing applications only.
So it does raytracing four times better than Volta. And it does 8 bit and 4 bit math (two and four times respectively) better than Volta. No mention of double precision math, I suspect it got sacrificed.
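The word-slicing arithmetic is simple: if the hardware just packs smaller operands into the same ALUs, halving the width doubles the op rate (a naive model; real kernels rarely scale this cleanly).

```python
def relative_throughput(bits: int, base_bits: int = 16) -> float:
    """Relative op rate if smaller words are simply packed into the same
    ALUs: halving the operand width doubles the operations per cycle."""
    return base_bits / bits

print(relative_throughput(8))  # 2.0 - 8-bit math at twice the FP16 rate
print(relative_throughput(4))  # 4.0 - 4-bit math at four times the FP16 rate
```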
From the looks of this, this is 2018-2019's device for raytracing. It is clearly also the device for machine learning with 4 and 8 bit operations, possibly combining them with FP16 and more precise math. I suspect that most of Volta customers won't want to upgrade at all, unless they can quickly modify their code to go from FP16 to 8 or 4 bit operations.
Which really makes you wonder what the whole point of the beast was. Was it simply too expensive to try to slather transistors all over hundreds of mm² of 7nm process? Does nvidia have information that the 7nm process is hitting the same issues as Intel's (4-year-late) 10nm process? Maybe they will turn around and shrink this onto a 7nm process soon enough; they seem to have no fear of slapping compute chips out one after the other (unlike the video cards we've been waiting a few years for).
I'm also really wondering what they are thinking about raytracing. I'm guessing that should be left to the graphics engine writers.
"upcoming GPU from nVidia" There's one big catch:
TSMC 16nm [Pascal architecture] used by 1060-1080 available as of 2016
TSMC 12nm [Volta architecture] used by Volta supercomputer/machine learning chips as of 2017
TSMC 7nm [Turing architecture?] available when?
So if they launch a new set of boards before 2019, it has to use one of the above.
[Pascal Architecture] more or less a straight rebrand, probably using GDDR6 instead of GDDR5 for a ~15% gain (they may have to rebrand things "up" a notch to justify the new names. If they do, expect prices to follow the new names). Somebody is buying up GDDR6, although I really suspect Apple.
[Volta Architecture] it is available, but expect investors to scream if they sat on a better architecture during the mining craze and suddenly tossed it into the ring with AMD likely to release a 7nm part less than a year later. Even if they want to spoil AMD, I'm sure they can get TSMC to produce their chips well ahead of AMD's.
[7nm chips, let's just call it "Turing"] It would be a major miracle to get them out in a month (or two). Expect to see the next iPhone first, and once they can make enough iPhones nVidia may get a crack at the fabs. This generation should have significant gains, but I don't know when it will come out. AMD is supposed to make a "7nm" board in Q4, but that is VEGA warmed over, an overpriced "pro board" (so they don't have to make many) and a GloFo [not TSMC-built] "pipecleaner" run to boot (i.e. they are expecting dreadful yield and just using it to learn how to make 7nm in volume). The AMD board is more useful to gauge how far they are from making Ryzen 3 available than any GPU, since any AMD GPUs will be made at TSMC.
I'm also expecting shenanigans to justify the need for the recent NDA scandal. Expect sites with reviews to insist it is the real deal, while the others point out that the emperor has no clothes. If nvidia does release such a thing [either Pascal rebrand or even a Volta], I'll be worried that TSMC has run into the same issues that have delayed Intel's 10nm (comparable or better than TSMC's and GF's 7nm, the number is mostly marketing) for something like 4 years. I might even bother to care about rumors of the next iPhone to see if anything is happening there (they are almost certainly first in line at TSMC).
A rebrand would be an attempt to "reboot" the MSRP, and presumably slap a founders tax on it. I'd expect the used market to stop falling as well if they would remain comparable to the "new" nvidia cards.
When people aren't paying $100 "founders tax" anymore.
Probably not. Unless the colors were chosen after the mining craze was obvious (and the miners didn't care about color), there's little reason to have more colors. I'm sure RGB will be necessary for any card advertised as "gamer".
I like weird storage setups and have been trying to come up with a better one for awhile, so I was already aware of most of the issues and only had to bang them out.
You should have mentioned the NAS. It makes keeping 2TB of HDD look much sillier, and makes a RAID0 of two 1TB drives (I don't think dropping down to four 500GB drives would be any cheaper) make a lot more sense (you could just back it up to the NAS). Using 2TB of SSD might make Optane caching realistic (caching an SSD, not an HDD), but I don't think many computers can take advantage of that speed.
What are your current framerates? If you crank the graphics settings up to what you want, what are they then? It doesn't sound like you want to push much past ~70Hz.
A quick peek shows the 570 similar to the 1060 (3GB) and the 580 similar to the 1060 (6GB). I think the prices of the AMD cards have fallen more back to Earth, and I'd look for a deal on the 8GB 580 (I think the 4GB was the favored mining card). Even so, don't expect more than a ~40% increase in framerate over a 1050 Ti, and if you are near the limits of your monitor it won't be worth it.
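A quick way to see whether the upgrade is worth it (the ~40% uplift and 75Hz monitor are illustrative assumptions):

```python
def effective_fps(current_fps: float, gpu_speedup: float, refresh_hz: float) -> float:
    """Framerate after a GPU upgrade, capped at the monitor refresh
    (this ignores any CPU limit, which would cap it even earlier)."""
    return min(current_fps * gpu_speedup, refresh_hz)

# Already near the monitor limit: most of the 40% uplift is wasted.
print(effective_fps(60, 1.4, 75))  # 75
# Well below the limit: the upgrade shows in full.
print(effective_fps(40, 1.4, 75))  # 56.0
```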
PS: I'm more or less convinced that the rumored "this year's new nvidia cards" will be a rebrand of the old chips. They might use GDDR6 with slightly higher clocks to get the slight increases leaked (which might require renaming boards "up" a notch, not sure how they did earlier rebrands). I'd expect the "real" next generation boards to come out deep into 2019 when TSMC can make the next generation (7nm) chips for them.
[note that if you've already bought the 2TB SSD, I'd still recommend a 2TB HDD (or more, preferably 3TB) for backups. Also note that striping (aka RAID0, available on almost all motherboards) will make a 2TB drive out of two 1TB drives, but that would make me want the backups even more, as you lose all the data if either bricks. As far as I know, neither Optane nor Enmotus will cache RAID drives.]
2 1TB SSDs: ~$350
2 TB HD ~$50
StoreMI for non-X470/B450 boards: $40 that limits you to 256G (2G RAM), $60 that "limits" you to 1T SSD/NVMe and 4G
Inland Pro PCIe NVMe $60/128G $110/256G
Sandisk Ultra 3d $60/256G $90/500G
TB SSD: $350
enmotus software ($60), Sandisk 500G drive ($90), Seagate 2TB drive ($50) = $200
enmotus software ($40), Inland pro 256G ($110), Seagate 2TB drive($50)= $200
Optane with 118G ($200) Seagate 2TB drive ($50) = $250
Optane with 58G ($113) Seagate 2TB drive ($50) = $163
enmotus ($40) Sandisk 250 ($60) 2TB drive($50) = $150
So it all boils down to how big you think the "hot spot" of your drive will be. The other advantage of caching an HDD is that a 3TB will run you $70 and a 4TB will run you $100, while SSDs just stay extremely expensive at those capacities. In any event, I'd still recommend an extra HDD for backup, regardless of how much SSD you buy.
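Totalling the options above, cheapest first (prices as listed in the post):

```python
# Totals for the caching configurations listed above (prices from the post):
configs = {
    "enmotus $60 + Sandisk 500G + 2TB HDD": 60 + 90 + 50,
    "enmotus $40 + Inland Pro 256G + 2TB HDD": 40 + 110 + 50,
    "Optane 118G + 2TB HDD": 200 + 50,
    "Optane 58G + 2TB HDD": 113 + 50,
    "enmotus $40 + Sandisk 250G + 2TB HDD": 40 + 60 + 50,
}
for name, total in sorted(configs.items(), key=lambda kv: kv[1]):
    print(f"${total}: {name}")
```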
I'll have to point out that I'm not sure there's any point in buying a 256G NVMe; I think the 500G setup might be just as fast and hold more. That's my best guess for the sweet spot of this build (assuming you have 16G of DRAM).
One final note: if this is a complete build you might seriously consider staying at 8G memory, taking one of the Optane options and partitioning the drive to create an 8-16G paging file on it (assuming you can convince the software to do something that crazy). This should give you almost all the benefit of 16G+ for considerably cheaper. I'd still want an SSD boot drive, which forces you to fight Windows to stop putting everything on the small boot drive and move it to the HDD.
You would be better off asking this on a Solidworks forum, where people have more experience with the issues (the goal here is to make sure that Solidworks keeps using the Quadro, and hopefully get more power for gaming). I suspect that if you put a more powerful gaming board in the computer, you will have to do a lot of fiddling to convince the games to stay on one board and Solidworks on the other.
What type of monitor[s] are you using? It might be possible to tie the Quadro to a center one, use 1060s for side monitors, and set Solidworks to only use the center. You also might wind up convincing Windows to treat "left and right" outputs that plug into the same monitor as separate displays: tell Solidworks to use the right and games to use the left, then choose your input on the monitor (I don't think this will help with a 1060: performance should be similar to the Quadro).
If your budget only fits a 1060 (and the Quadro has similar performance, as I suspect it does: a 5G, 1000-core Pascal chip pretty much implies it uses the same silicon as a 1060), I'd suggest standing pat and waiting for better boards. I have little hope of nvidia releasing new boards in a few months (it doesn't make any sense; perhaps they will just rebrand old boards as "1180"), but sometime in 2019 I'd at least expect a new generation of GPUs to finally arrive (once TSMC starts shipping 7nm chips).
[not a noob option] SLI is all but obsolete now, and I really doubt a 1060 will help much. There is the crazy option of trying to flash your 1060 (hope it has 6G) to think it is a Quadro P2000, but I suspect it will only lose a gig of RAM (and a few CUDA cores) and not help you. Also, if your system "sees" two Quadro cards in the box, Solidworks is likely to try to use them, and you really don't want to be doing pro work on a flashed card (I have no idea if you can still flash cards, but it was a thing at one point).
No. Always expect AMD to be late. I'd expect them out in November of next year, not this one (I'm pretty sure they are saying 2nd half of 2019 anyway, certainly not in 3 months).
Then Ryzen is definitely for you (Intel really can't compete with AMD until overclocked, and after that there are measurable gains in benchmarks, if not in real use).
The 2700x is $100 more for 2 more cores and 4 more threads, none of which is likely to be used in any game soon.
On the other hand, for video work and any future game that is built for it, you get up to 33% more power.
On the gripping hand, zen2 is supposed to be compatible with current motherboards, so you can probably leapfrog the 2700x with a (by then) old zen2 by the time anybody manages to write a game that improves when using 8 cores. I'd go with the 2600x.
Also, I'd recommend looking at the brand new B450 motherboards. The Asus Prime B450 has nearly all the features (save Crossfire/SLI, which is obsolete) of your Asus Prime X470, but costs ~$80 instead of $150.
Exactly. Claiming a particular CPU "bottlenecks" a GPU simply assumes that the CPU does little else than make driver calls to show polygons and the GPU does little but display polygons. Hopefully, your CPU is busy simulating physics, doing AI for any NPCs/bots, and basically running the game while the GPU is more concerned with drawing the pixels. Actually, I doubt many people who use the term get this; they are just repeating what they've heard on the internet.
It shouldn't be a surprise that AMD is behind the DX12/Vulkan effort to cut down all those "call driver/draw polygon" routines (it was originally Mantle, for AMD only), and this should help cut down on the bottlenecks (assuming you can enable Vulkan; it has to be built into the game, and nvidia drivers still aren't too interested in it).
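A toy model of the bottleneck idea (illustrative millisecond budgets, not measurements): the slower of the two stages sets the framerate, so speeding up the other one buys nothing.

```python
def frame_rate(cpu_ms: float, gpu_ms: float) -> float:
    """The slower stage sets the framerate: a 'CPU bottleneck' just
    means cpu_ms > gpu_ms for this game at these settings."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU busy with physics/AI for 10ms a frame, GPU draws in 8ms:
print(frame_rate(10, 8))     # 100.0 - CPU-bound; a faster GPU changes nothing
print(frame_rate(10, 12.5))  # 80.0  - GPU-bound; now the GPU is the limit
```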
First, look up incremental backups. These only store new or changed data and reuse the old data from the original backup. They should save a lot of time and drive space (presumably incremental weekly, full monthly). I can't tell you how much space this will take up, obviously somewhat less than a full backup; it depends on your data turnover (but I can't believe you are replacing terabytes over the course of a month).
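A back-of-the-envelope estimate of that space saving (the 4TB data size and 2% weekly churn are illustrative assumptions):

```python
def monthly_backup_bytes(full_bytes: float, weekly_churn: float, weeks: int = 4) -> float:
    """Rough space for one backup cycle: one full backup plus weekly
    incrementals that each store only the data changed that week."""
    return full_bytes + weeks * (full_bytes * weekly_churn)

TB = 10**12
# 4TB of data with ~2% changing per week: about 4.32TB per month,
# versus 16TB+ if every weekly backup were a full one.
print(round(monthly_backup_bytes(4 * TB, 0.02) / TB, 2))  # 4.32
```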
Obviously you want at least a 4TB drive, which is a good place to start (3TB and 4TB drives tend to have the lowest cost/GB). Sorting the drives by cost/GB and only looking at drives >=4TB, we get a Hitachi 4TB.
It seems to do well in Backblaze's stats (models with a trailing 630 don't; ones with 640 should be ok). Note that while I normally like Backblaze data, I'd recommend keeping backups on some sort of removable drive (although ideally using SATA or eSATA; the wrong USB might take forever), which has a completely different way of killing drives (turning them on and off instead of simply running them 24/7 like Backblaze does).
https://www.backblaze.com/blog/hard-drive-stats-for-q2-2018/ [backblaze data. Might be more useful if you were building a NAS, but it is all the data we have].
That's really all there is to it (choosing the drive, anyway). Put your requirements into pcpartpicker, sort by price, and eliminate any drives/brands you might be prejudiced against (plenty of people hate Seagates); since there are only a few companies that make drives and all the internals tend to be identical for volume efficiencies, price really doesn't affect reliability at all. You can usually ignore drive speed and any performance issues; it will take all night regardless of the drive (maybe not all night, but you don't want to stand around while the thing copies 2TB). Note that if you leave the drive in the computer, a 5900rpm will be quieter, cooler, and draw less power.
You can simply add a secondary and go. No issues, just start using it as D: drive (I don't think you even have to tell the bios anymore).
Any RAIDing depends on your motherboard and/or OS. Expect to have to backup the drive, setup the RAID, and then load everything back on (see this thread https://pcpartpicker.com/forums/topic/288200-best-free-clone-software-to-use for saving the drive). Remember if one drive bricks, you basically lose all your data. I'd also have to wonder if the OS overhead wouldn't wipe out the speed gains of RAID (assuming SSD, I think it almost always helped with HDDs).
If your new drive is faster/larger than the old one, I'd look into the software on that thread to simply copy everything over and then enlarge the drive (should be a windows option, although I last used it in Vista) to fit the new, improved space. Then wipe the old drive and use it as "secondary" space.
https://clonezilla.org/ is another choice, although I suspect that its additional features (which might be critical in the non-Windows world) pale compared to being able to browse the clone in Explorer. Note that it does incremental backups, something you have to pay for in Macrium Reflect Free.
The price drop should come before the release of a next generation to clean out inventory and avoid having to sell for even less. The used market should obviously see a drop in price, but that is a separate consideration and will probably reflect current market value even closer than new boards (who are trying to sell boards at prices close to the crypto bubble).
I still can't see any reason for nvidia to design a new chip before TSMC has their 7nm process ready (sometime 2019). If nvidia released a chip before that (presumably a 12-14nm Volta based chip), their investors would be screaming about not releasing said thing in the last 2 years during the crypto bubble (and thus making bank).
Don't be surprised if nvidia sends rebranded 1080 boards (possibly with a few more cores enabled) as "1180" boards to all the tech sites who signed their NDA, with explicit wording that "rebrand" and similar terms are under NDA and they must pretend these are completely new boards (this also allows nvidia to "reboot" the MSRP). Followed by a "2000" series next year that will be all-new chips and a real performance boost.
[TL;DR: yes, a G4560 will remain relevant years from now, but once you add the extra costs of building a system around it (mainly the DDR4; it looks like the internal GPU will be enough for you), it doesn't really make sense any more.]
Keep in Win 10: If Microsoft lets you. Then again, they plan on morphing windows into whatever they need and keep calling it "Windows 10".
"Just has to be useable": Nearly all programs are written to use only one thread. Don't expect an i9 beast to run any faster unless the programmer really needed more performance and was willing to sweat it out to get use the extra cores/threads.
Hardware degradation, cache files: Hardware won't degrade unless you overclock/overheat it. Windows used to require periodic reinstalls, but I haven't heard of that for a decade, and that would be the only thing that affects cache files. A small SSD might have issues, but even a 120G drive should be enough and have TRIM for many years of use.
let's assume any other potential hardware won't be a bottleneck:
SORRY, NO CAN DO (memory is both a price and power bottleneck):
From the looks of it, you are going for a minimal-cost working computer. The catch with the G4560 is that the memory will cost more than the CPU itself. The i3-240 is almost the same as the G4560 (speed, cores, threads) and has AVX for considerably more powerful video decoding. The real reason you would want such a thing is DDR3 (you leave some performance on the table, but the cost benefits make more sense at that level). See the following thread for somebody planning on using an i5 for this very reason:
[of course if you can get an i5 for that price, I'd recommend it over the i3]
Note that if you are doing an absolute budget build, this gives the option of using 4G of DDR3, which will probably leave you with a functional web browser (not an option with DDR4). If/when this becomes too much of a problem, a $40 16G Intel Optane (used as a paging drive; it won't work with the caching software) should work embarrassingly well (and should keep working even when 8G of DRAM isn't enough, and I can't imagine 16G on a G4560).
[any other potential hardware isn't as tied to the CPU as memory and can be considered separately. I'm happy to ignore them]
If web browsing and video watching really is all this thing is supposed to do, I'd also strongly recommend looking at chromebooks and similar out of the box options (perhaps a strong raspberry pi competitor would also work, especially if you wanted an external monitor and keyboard).
If you feel stuck with DDR4 and are remotely interested in video/gaming speed, I'd recommend looking at the 2000-series APUs. Unfortunately they do cost more and will magnify your RAM issues.
a few notes:
The AMD A320 is not overclockable. It looks like it will be hard to go wrong with a B450 motherboard (there might be a "just released" tax right now, I haven't dug into the prices).
Memory: DDR3 is used for processors a few generations back. AMD boards that use DDR3 are painfully obsolete and old Intel boards that use it may still be a better buy than modern pentiums, but are definitely budget builds.
HDDs: Note that (sorting by price), the first TB will cost you $40, the next $10, the third $20, and the fourth $25 or more (and it typically gets much worse after that, but there are a few that make sense even compared to 3TB and 4TB drives). In other words it doesn't make much sense to buy a 1TB instead of a 2TB or 3TB, and a 4TB isn't a bad deal (just that in general you are paying the same amount for each TB in a 4 as a 3).
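The marginal cost of each extra terabyte, using the rough street prices above (the fourth TB is "$25 or more", so take that row as a floor):

```python
# Approximate street prices by capacity, from the note above (USD):
price_by_tb = {1: 40, 2: 50, 3: 70, 4: 95}

# Cost of each additional terabyte over the previous size:
prev = 0
for tb in sorted(price_by_tb):
    print(f"TB #{tb}: ${price_by_tb[tb] - prev}")
    prev = price_by_tb[tb]
# The first TB is the expensive one; each one after that is cheap.
```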
If you don't fill up your SSD, you don't need one. If you tend to download a lot of stuff, you do (and should go for 3TB or more). If you don't want the hassle of moving things from your SSD onto the HDD (because it keeps getting cluttered), the AMD B450 and X470 motherboards include a nifty feature called StoreMI that handles this for you (you can buy it for Intel as well for $40) and gives the capacity of an HDD with usually the speed of an SSD (provided you have both in your PC).
SATA SSD: these involve connecting two cables (one data from the motherboard, one power from the power supply). I'd recommend starting with one of these and leaving the M.2 connector (assuming it uses PCI-e) until a PCI-e SSD makes sense. Other people hate cables and will go straight to the M.2.
M.2 SSD: These come in two varieties, SATA and PCI-e. PCI-e costs twice as much and is somewhat faster, although I've never heard of anybody being able to tell except for booting and the odd level load. I'd recommend leaving the slot free in hopes of "PCI-e tax"-free drives becoming available in the future (it should cost the same to make each, but the M.2 market is smaller now).
You need 8G and 4G pairs; I doubt they are sold together. It works fairly well and is a comfortable spot between the bare minimum of 8G and the overkill of 16G. LGA1155 isn't so old that you had 3 channels of DRAM.
I can't really recommend it for anyone who doesn't have 4G already on the motherboard, or a much older i5 that uses 3 channels of DRAM. Buying 4 sticks of RAM in at least 2 separate orders doesn't make sense.
Sounds like a quick and dirty check would be to underclock your CPU by 10% and see if you get a corresponding ~10% cut in "GPU" benchmarks (you might need to cut the memory clock as well; that could also be the limitation). It seems odd that such an old game can still be CPU-limited, but they might have been updating the bots. Still, CPU limitation makes so much sense that cutting the CPU clock is the first test I would run.
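Interpreting that test is just a ratio check (the framerates and the 3% tolerance here are illustrative):

```python
def looks_cpu_bound(fps_before: float, fps_after: float,
                    clock_cut: float, tolerance: float = 0.03) -> bool:
    """After cutting the CPU clock by `clock_cut` (0.10 = 10%), a
    near-proportional FPS drop suggests the game is CPU-limited."""
    fps_drop = 1 - fps_after / fps_before
    return abs(fps_drop - clock_cut) <= tolerance

# 10% underclock drops 120fps to 109fps (~9% drop): CPU-bound.
print(looks_cpu_bound(120, 109, 0.10))  # True
# Same underclock barely moves the framerate: the limit is elsewhere.
print(looks_cpu_bound(120, 118, 0.10))  # False
```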
There is no point paying for empty storage. By the time you fill up a 500GB SSD, it might be just as cheap to buy a second 500G drive as to buy rotating storage.
A 500G 860 Evo runs 95 quid; a 3TB Toshiba drive + 250G Crucial MX500 runs 110 quid. The Samsung answer is cheaper and obviously better for storage needs between 250GB and 500GB. If you find yourself filling all available space, go with the rotating drive. If you don't, adding such a drive is more expensive and slower (and a real hassle if you don't have something like AMD's StoreMI).
I suspect that (new) hardware coming out in 2019 won't be any cheaper than 2018. Hopefully significantly better, but I doubt anything will be cheaper. You might get some great deals on otherwise powerful used pre-2019 hardware.
Since the OP specifically mentioned the 8700K, I'll assume that overclocking is a given. If not, the 2700x will match the 8700k in gaming. In practice, it will match the overclocked 8700k in gaming as well, unless you are gaming at low resolution and ultra-high framerates.
Overclocking the 2700x is a little more iffy, but might improve its photoediting power even beyond its normal better-than-the-8700K (don't count on improving gaming).
The other question is what monitor? I'm guessing 1920x1080, but if it was 720p I'd expect a 1050 Ti to do just as well as a 6G 1060 (or even a 1080 Ti, for that matter).
With a 1920x1080, the "midrange 1060s" come into play.
Depends how serious US, EU, and Chinese regulators are about reining in price fixing between Samsung and Micron (is Hynix even a factor anymore?). If they are convinced to compete, then expect lower prices. If they are allowed to continue to carve up the market and fix prices, nothing will go down.
Next year Intel should release some "3dxpoint memory" for servers (and expect loads of Intel market segmentation to keep this from being much of a deal), so at least demand might go down as some memory is moved to optane. Micron might take another year to release their stuff, and I suspect neither Intel nor Micron are all that keen on dropping the cost of memory in your computer.
I'm seriously confused by this problem. HDDs are going to be slow, and "fast" HDDs are going to be ~50% faster than WD Greens, but still slow. Move the games to the SSD, keep the TV shows and movies on the HDD (you aren't going to notice the ~10ms delay to start, and after that there is no difference).
If you have problems fitting all your games and other "non-streaming" stuff on the SSD, I'd look for a cheap SSD first (not so cheap that it doesn't have DRAM, but since you'd boot off the EVO and keep your OS and critical stuff there, it doesn't have to max all the benchmarks; something like an Inland Pro should work wonders).
For $20 you can buy the feature the X470 motherboards have that combines an SSD+HDD into one large cached hard drive; that sounds like a better fix than replacing your drives (and will certainly be faster than 7200rpm drives). You might need to clear either the HDD or SSD, so it might make sense to buy a cheap SSD (like an Inland Pro) along with it.
(note that they have non-AMD specific software that does the same thing for 512G SSDs for $40: if you are buying a SSD to go with this, that should make a lot more sense).
You have a laptop hard drive lying around? I can't imagine it was in the old tower, and a 3.5" doesn't appear likely to fit in the new case. The case seems like overkill (with room and power for a GPU), but the looks could well justify for this use.
[original rant before I noticed you needed an ITX board. I'm pretty sure you can't swap the CPU between boards without removing the heatsink, but you might give it a try.]
That's a socket FM2 chip, so it uses DDR3 and the motherboard won't socket a Ryzen. I'd simply leave the CPU/APU in the motherboard and replace them both at once, never separating them. You'll have to clean and replace the thermal paste if you separate the heatsink from the CPU. Unless of course your heatsink isn't worth transferring (I'm in this boat: I have an AIO-watercooled Bulldozer... I swear each component made sense at the time...).
In that time we should see:
third-generation Ryzen (the redesigned Zen 2, not just Zen on a slightly better manufacturing process, which shipped this year as Zen+)
presumably Intel on the next process (not expecting much improvement, unless you want low power for laptops)
new GPUs from nvidia and AMD (possibly two generations if you believe the rumors of an imminent nvidia release)
presumably a switch from SATA to PCIe as "the standard", with PCIe NVMe drives selling for closer and closer to SATA prices
deeper levels of 3d flash? (i.e. you won't bother with 240GB, everything will start at 480GB)
presumably in 2020 Micron will produce their own 3D XPoint chips (possibly sometime in 2019; the wording isn't clear). No idea if they will get the bugs out, but if they do it will certainly change how you buy SSDs.
And you are asking us now what to buy? SSDs have been changing rapidly for the last 10 years. Expect things to really change next year (we should hopefully get a new node/chipmaking improvement, something that happened every other year from 1970-2010 but is now painfully rare), so don't set any plans in stone this year if you plan to wait that long.
Are you building a Ryzen? They must be the only computers left that don't have iGPUs. Sorting the video card list by price turns up the 710. That has HDMI and DVI (the picture seems to show VGA, but you might pay a few bucks more to be sure if you need VGA). Anything remotely current should be able to display at least 1080p and probably a lot higher. Just expect a slideshow if you try to game on it (moving the mouse shouldn't be an issue).
Note that case fans are on the shelves of Microcenter and are pretty cheap. If you have Amazon Prime, shipping won't be a problem; otherwise I expect it doesn't make sense to ship fans. I think you can get a decent case shipped for that price range, so consider that (and adding fans if possible). Personally, I'd just replace the case fans and add more wherever there is a place for one.
If the three drives are all 3.5", that will be an issue for the case (that is pretty much the point of an N500: 3.5" bays, 2 5.25" bays, and at least 2 2.5" bays; I could presumably print out a bracket for 2 more, the space is there but I don't remember a second bracket). Although if they are all spinning drives, I'd assume you would have recommended an SSD by now.
"but other factors such as cache size, number of cores and threads, etc. "
Except that things like cache size and number of cores and threads are almost the same between the 2600K and 7700K i7s (cache size changes a little; I think the change from DDR3 to DDR4 is more significant). Intel chips have barely changed internally since 2011: same number of decoders, execution units, etc., with only minor tweaks in things like reorder buffer sizes and whatnot. You can't use clockspeed to compare Intel to AMD (or between Phenom, Bulldozer, and Ryzen for that matter), but you can get a very good comparison by comparing post-2011 Intel to Intel (except Atom-type CPUs). Note that there is a huge difference between this and the previous generation (Core), and a similar jump from the Pentium 4 to the "Core", so don't try to use this for pre-2011 chips.
One other big change between 2xxx and 7xxx is the AVX instructions. Back in Haswell (4xxx) they widened from 128 bits to 256 bits (AMD is sticking with 128 bits); this allows lower-threaded Intel processors to keep up with higher-threaded AMD chips on sufficiently parallel tasks (which is where more threads come in handy anyway) like Cinebench. Don't expect to see AVX used anywhere but video jobs or math-intensive heavy engineering work. And while there is a 512-bit-wide variant, that is Xeon-only and much less useful (it practically cuts your clock rate in half every time you touch it, wiping out most of your gains).
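To see why vector width matters so much, here's a back-of-the-envelope peak-throughput sketch. The core counts, clocks, and "one FMA unit per core" are illustrative assumptions, not specs for any real chip; the point is only that doubling the vector width doubles the peak for code that can use it.

```python
# Rough peak single-precision GFLOPS from SIMD width.
# Assumes one FMA unit per core (real chips vary); purely illustrative.

def peak_gflops(cores, ghz, simd_bits, fma=True):
    """Peak SP GFLOPS: 32-bit lanes per vector * (2 ops for FMA) * cores * clock."""
    lanes = simd_bits // 32              # floats packed per vector register
    ops_per_cycle = lanes * (2 if fma else 1)
    return cores * ghz * ops_per_cycle

# Two hypothetical 4-core 4.0GHz chips differing only in vector width:
avx256 = peak_gflops(4, 4.0, 256)   # Haswell-style 256-bit AVX
sse128 = peak_gflops(4, 4.0, 128)   # 128-bit SIMD

print(avx256, sse128)  # the 256-bit chip has exactly twice the peak
```

This is the whole trick behind a 4-core Intel hanging with a 6-core AMD in Cinebench: width substitutes for threads, but only on code that vectorizes.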
Intel really hit it out of the park in 2011 and hasn't done much to improve a chip since (DDR4 and the 6 cores of 8xxx are the biggest changes). Some of this has been lack of a reason to (AMD couldn't compete until Ryzen and IBM barely has a grasp on the server market), but some of it is likely that they are running up against the limits of what you can do with silicon and the x86 platform.
[is AMD producing 1st gen ryzens?]
As far as I know, the EPYC server CPU didn't get the upgrade that Ryzen received. Therefore if AMD wants to sell EPYCs, they have to order zen from Global Foundries. If they feel they can sell them better as Ryzens, they can presumably package them as Ryzens and sell them as 1700/1700x (although I'd assume they'd really want to sell the 8 core jobs as EPYC).
So the answer is "probably no, but it isn't impossible". But whenever they want more ryzens, they will order the 2nd generation from Global Foundries.
I wouldn't worry about it, and I'd expect it to be significantly faster than most console CPUs (which are even wimpier AMD cores). While the GPU seems all-important in a gaming rig, don't forget the RAM (and be glad it will take the cheaper DDR3) or the SSD (cheap these days).
I'd expect the Phenom to keep up with the 1060 or RX580, and certainly with the 1050ti. Note that all of these are limited to a 1920x1080 display at ~60Hz, and it probably wouldn't hurt to go down a notch if such displays are available. On the other hand, I know you used to be able to buy old 19" monitors for a song at the local college's surplus store (there may have been an issue with the fire marshal, I'm not sure about now), and buying 3 cheap monitors might go a long way (especially for non-gaming). I'd certainly recommend not bothering with FreeSync (and G-Sync is right out at that budget), as the monitor for this rig probably won't be reused in the next one (however long that will take).
One thing about the RX580/RX570 is the option to use Mantle/Vulkan/DirectX 12 if the game supports it. This should take a lot of stress off the CPU, which is probably the reason AMD created Mantle (Ryzen was a long way away).
PS: I'm running a 3.5GHz AMD FX-8320 Bulldozer (weaker than your Phenom) with a 560ti GPU and haven't felt the need to overclock my CPU (although it might help for Kerbal Space Program), though I mostly play ancient games. Once the CPU gives the GPU its instructions, there isn't much else for it to do. I'm certain that a 1050ti won't be left waiting on a Phenom, unless the game is basically a physics simulation (like Kerbal Space Program, or perhaps some driving/racing simulators; note that most of these do fine at 30fps unless you are driving a Formula 1 or similar "twitchy" vehicle).
I have no idea. It all comes down to drivers. If you are doing fancy things with your drives (RAID, caching the drives on SSD or Optane), you will certainly need to have the Windows box up and running. I'd like to have an image of the boot drive as well (something Linux does well), but that can be made by merely booting a simple Linux image (possibly USB/DVD based) and backed up on Windows as a file.
If you can find drivers for Windows and have never used Linux, I'd expect you to use Windows (the last thing you want is to make a mistake in your backups). The drivers will be key (hopefully there won't be a significant cost for Windows drivers).
There's some speculation lingering about nvidia's "canceled" Hot Chips talk (although Hot Chips would make more sense for a "long time from now" release). The rumors don't really make sense, since we know TSMC (who makes nvidia's and AMD's chips) has been able to make 12nm for at least 2 years (and has been making Volta that long), and nvidia has had the Volta design working for 2 years, plenty long enough to have shipped 12nm gaming chips. In 2019 7nm should be available, which should produce much better gaming GPUs, and AMD has promised us some [the 2019 TSMC 7nm chips; don't expect to afford the 2018 GloFo 7nm chips or be all that impressed with their performance]. I've come up with a few plausible explanations for these rumors:
A. Pure speculation/wishful thinking. People can afford a GPU now, so they want a new one now. And places like wccftech are happy to tell them one is coming to keep the hits coming.
B. Nvidia plans a "launch" centered around renamed cards. They've done this plenty of times in the past, and now have a nifty NDA so that the handpicked "tech sites" given review samples won't be allowed to tell you it's a rename. How many times has nvidia renamed a launch? Also, this gives them a chance to "reboot" the MSRP.
C. It is the real deal, and an ongoing disaster: they launched Volta 2 years ago (the 1180 should have been here at least 1.5 years ago) and completely missed the cryptomining craze (and the ability to simply scale up the price with the performance). Investors won't be happy at all.
D. It is the real deal, and nvidia had to scramble a couple of years back to bring it back from the dead because they didn't expect TSMC to produce 7nm anywhere near on schedule (everyone seems to agree that 7nm is in production, presumably already building Apple chips or chips for other guys with deep pockets and easier-to-manufacture designs). If nvidia ships, I'll be worrying about Zen 2 and Polaris (and watching even more carefully for signs of 7nm Vega, if only to confirm that GloFo can produce Zen 2 in 2019).
E. It is the real deal and on 7nm (all rumors say 12nm). This is almost certainly the least likely possibility, and would be a major coup if they pull it off (although I'd expect they'd have to pay an ARM and a leg to bump Apple's place in line). If this is the rumor, expect the launch to continue to slide and tons of availability issues, but when it finally launches it will be a powerhouse.
If you don't like having free-sync off, you certainly don't want to pay for a new G-sync monitor.
As for an aftermarket cooler: if you have access to a 3D printer (the local library had one, but they moved and I haven't seen it in the new building; I'm sure one of the other libraries must have one), try printing out the files in this thread:
If you get something workable, I'd bet that a compatible AIO water cooler would be far more effective than an off-the-shelf aftermarket cooler.
At first I assumed that you would be far better off using Linux (either dual booting or building a separate NAS/tape-backup box if doing anything tricky with your filesystems), but it appears that a few low/no-cost options exist on Windows (Dell wants $400/year for their software). Most google hits confirmed this view, but at least one comment mentioned the following:
https://www.z-datdump.de/en/index.html looks the most straightforward for Windows, but will charge you if you want to use Windows Server (presumably it will allow personal use past 30 days on win7/8/10).
http://www.amanda.org/ claims to work "on anything with drivers". Of course those are POSIX drivers, and Windows POSIX compatibility existed mainly for government checkboxes, so I'm less certain about Windows compatibility (they do have a Windows edition).
http://blog.bacula.org/ is open source and appears to have a lot of Windows-specific features (it appears to back up open files, although that claim hasn't been updated since Vista. Fortunately win7 is basically "fixed Vista").
If you are using Linux I'd go with that, and I wouldn't be at all surprised if you wind up setting up a partition or a cheap old system thanks to driver availability. But I'm guessing that if you find the drivers, one of those 3 will work.
How small can you go? Intel 3D XPoint appears to be relatively inexpensive and blazingly fast for a tiny drive (you might need to leave bits of the OS elsewhere). Otherwise I suspect you would have the problem of a drive that costs much more than a 500G SATA drive while being slower.
I'd recommend finding some benchmarks before buying. I wouldn't count on either of them saturating a SATA bus, let alone the smaller one (although it is cheaper than most 512G drives).
I'm using a Coolermaster N400 that has two 5.25" bays (and is full of both DVD and Blu-ray writers). They have an N500 that includes 3 if you need an optical drive as well (I suspect anyone who buys a tape drive has too many optical discs lying around that need to be saved/read).
I haven't had a tape drive since CD-Rs became affordable, but back when those drives cost >$500, I bought an 800MB tape drive and kept all kinds of data on it (including overflow from a 3GB drive, not just backups). The unique thing about that tape drive is that it is the only piece of hardware I've owned that never had a Windows driver (I had to use DOS [meaning I often had to use PKZIP to save long filenames]; Linux has since had drivers). If drivers become an issue, I'd strongly recommend looking to Linux (especially for "server gear"). [Note that for anybody who had to deal with 8-bit computers and their audio cassettes, this was a great leap of faith.]
I've been wondering about building an array of disks out of "refurbished/used" drives, or otherwise cheap drives. These run at roughly $20/TB (your tapes run at $15/TB, which seems to be the going rate). The obvious catch is that using such an array (even in RAID) is a great reminder to keep your backups current. A second array (with at least as much RAID redundancy) would be needed, and tradition mandates 2 copies (one offsite). Two copies drive you into tape territory, as does even a moderately sized array, assuming you are interested in keeping a running backup with the ability "to go back in time".
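A quick sketch of the cost math, using the $/TB figures above. The redundancy factor and data size are made-up assumptions for illustration (RAID overhead varies by level; tape needs no parity drives):

```python
# Compare refurb-HDD arrays vs tape using the $/TB figures from the post:
# ~$20/TB for used drives, ~$15/TB for tape. Illustrative only.

def media_cost(data_tb, dollars_per_tb, redundancy=1.0, copies=2):
    """copies = total backup copies kept (tradition: 2, one offsite);
    redundancy = capacity overhead factor (e.g. 1.25 for RAID5-ish parity)."""
    return data_tb * dollars_per_tb * redundancy * copies

hdd_arrays = media_cost(10, 20, redundancy=1.25)  # two 10TB arrays w/ parity
tape_sets  = media_cost(10, 15)                   # two 10TB tape sets

print(f"HDD arrays: ${hdd_arrays:.0f}, tape: ${tape_sets:.0f}")
```

And that ignores the drive mechanism itself, which is where tape's up-front cost really bites at small capacities.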
I have to question your bit on tape speed. My reading of the specs made me think these things were pretty slow (although I was looking at LTO-5). Can they really keep up with a whole drive array at once?
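For concreteness, here's the sanity check I ran. The ~140 MB/s figure is LTO-5's quoted native (uncompressed) rate as I recall it (check your drive's spec sheet); the array numbers are assumed, not measured:

```python
# Can a tape keep up with a drive array? LTO-5 native is quoted around
# 140 MB/s uncompressed; assume a 4-disk array reading ~150 MB/s per spindle.

TAPE_MBPS = 140           # LTO-5 native rate (verify against your drive)
disks, per_disk = 4, 150  # assumed sequential read per drive
array_mbps = disks * per_disk

def hours_per_tb(mbps):
    return 1e6 / mbps / 3600   # 1 TB = 1e6 MB

print(f"array reads {array_mbps} MB/s vs tape's {TAPE_MBPS} MB/s")
print(f"1TB to tape takes about {hours_per_tb(TAPE_MBPS):.1f} hours")
```

So a striped array can outrun the tape several times over; the tape only "keeps up" if the backup job is the bottleneck elsewhere (small files, compression, network).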
If you aren't overclocking, then Intel's "gaming" advantage all but disappears. Intel's "gaming" advantage typically only exists when insisting on running games at 144Hz; if you're just trying to keep things above 60Hz, only the GPU typically matters (but don't buy a Bulldozer).
Two of Intel's best gamer CPUs are the 8600 and 8600K. They might be "3.1GHz" chips, but they will hit 4.3GHz in low-thread gaming. The catch is that the AMD 2600X matches this price, also runs 4.3GHz in low-thread situations (i.e. gaming), and easily runs faster in high-thread situations (presumably heavy work and maybe future gaming). The 8400 and 2600 do pretty much the same thing at a price point $40 lower (and 0.3GHz lower peak performance).
The i3 looks interesting, but Intel doesn't let the clock turbo higher (or at least doesn't list a turbo frequency), so what you see is what you get. The 3.6GHz i3-8100 will likely evenly match the 3.5GHz [3.7GHz boost] 1300X (both are 4 core/4 thread), at least until you try overclocking. The i3 is locked, but I'd expect the AMD to reach 4.2GHz... On the other side, Intel will sell you an unlocked i3 starting at 4GHz (and presumably an excellent gamer), but it will cost $170, or roughly what the 6 core/12 thread unlocked 2600 costs (which will boost to 3.9GHz, so it will likely have an advantage even before you try overclocking each).
The only advantage the i7 has over the i5 is that it has more threads. That is competing with AMD at its own game, which never works in Intel's favor (overclocking typically changes things at $350, giving you the best of both worlds if you get a ~5GHz i7-8700K).
AMD has two aces up its sleeve. One is the heatsinks: especially if you aren't overclocking, I'd expect AMD's stock heatsinks to be enough to hit boost frequencies while gaming (expect to replace one when overclocking); I wouldn't trust Intel's heatsinks even for that. The other is that motherboards are historically cheaper for AMD, and often easier to get bundled. WARNING: make sure the motherboard's BIOS is compatible, or you will have to wait for AMD to ship you a free loaner CPU. Microcenter had an unbelievable deal on a 2600X + Asus ROG motherboard (which will allow overclocking) (+ AMD Wraith Spire heatsink) for $284. Maybe they would update the BIOS at the service desk if you bought it at the store...
I think when you look at CPU+motherboard+heatsink, you will find that AMD delivers an identical current gaming experience for a lower price, and easily surpasses Intel at anything that uses more threads (which should include future games, especially if consoles try to maintain performance by spamming cores).
The FX-8350 also shared decode and dispatch. I'm guessing that shared dispatch was a big problem on the chip. Shared decode is pretty weird in that decoding multiple x86 instructions is relatively hard; no idea why they would group them up like that, unless each core had a dedicated complex decoder and they shared the simple ones.
Oddly enough, the Pentium 4 appears to have a smaller implementation of the same concept: in die shots you can clearly see a second L1/ALU block (this is nothing like a separate core, but the extra L1 is likely to help).
Link to an overly long deep dive (especially for a failed chip, but this was written long before it failed): https://www.realworldtech.com/bulldozer/
(or you could just wimp out and look at the wiki).
The Oculus has a 2160x1200 resolution and wants a consistent 90Hz framerate. I can't make head or tail of the limited Skyrim VR benchmarks on the web (and I didn't get that many hits to begin with), but I'm fairly sure that nothing is going to work with supersampling enabled. Without it, it looks like the 1080 (and even the 1070ti) gets the job done roughly as well as the 1080ti, but I doubt there is room for supersampling even on the ti. But I really don't trust those webpages, nor my ability to judge what they are claiming about "high settings" and "ms delay". Your best bet would be to buy the VR headset first, figure out what settings you want (and what framerates the 1060 provides), and scale up from there to see what video card you need (try not to get too sick when the framerates drop to 45fps or lower on the 1060). Note that I'm not suggesting that waiting a year or so would make you sick, only that trying to use things like supersampling and other unrealistic (or 1080-and-above-only) quality settings will.
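To put the VR load in perspective, here's the raw pixel-rate arithmetic (this ignores the per-eye rendering overhead and lens-distortion margin real VR adds, so treat it as a lower bound):

```python
# Raw fill-rate comparison: Rift-class VR (2160x1200 @ 90Hz) vs a plain
# 1080p60 monitor. Supersampling multiplies the load per axis.

def pixels_per_sec(w, h, hz, supersample=1.0):
    return w * h * hz * supersample ** 2

vr   = pixels_per_sec(2160, 1200, 90)
flat = pixels_per_sec(1920, 1080, 60)
ratio = vr / flat

print(f"VR needs ~{ratio:.2f}x the raw fill rate of 1080p60")
# with 1.5x supersampling per axis, multiply that by another 2.25x
```

So even before supersampling, a card that barely holds 60fps at 1080p has no business driving the headset.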
I'm expecting the next generation of video cards next year at the earliest (produced on TSMC's 7nm process). If nvidia releases a 12nm card this year, something went really, really wrong last year. If they produce a 7nm card I'll be really impressed (mostly by TSMC). Note that the rumors are all about 12nm cards.
First: check your warranty. You don't want to touch it if EVGA can fix it.
Anything that lets out the magic smoke will be hard to repair. According to the markings on the PCB, there should be 2-4 inductors and at least one capacitor in the damaged area next to the damaged power resistor. If the pads (PCB copper traces) are damaged, you have little hope of repairing it (and I don't know how you will find the values of the components).
That is pretty clearly the card's power stage that you fried. If a GTX 770 runs at a nominal 1.2V, I have to wonder if rigging a 1.2V supply into its output (yank out the fried components as well as the capacitors and transistors) would revive the thing. Only do this as a last-ditch effort, and do it with the unit outside the case as a test, in case of shorts or additional burning (you don't want to damage your working computer on an iffy fix for your GPU).
To be blunt, I'd be rather surprised if you can find all the values of the fried components and solder in replacements. Also, 0402 components just love to jump when the soldering iron touches them, so expect soldering to take a while if one is fried (0603 isn't as bad, and since this appears to be a power stage, most of the components will be fairly hefty). No idea if the GPU itself took damage during the overheating as well. Most of these ideas come from repairing >$10,000 boards used by the Navy; it is extremely unlikely to be worth the time of anyone capable of finding the component values on a ~$100 board. On the other hand, if you have an electronics lab handy, supplying 1.2VDC to the fried parts' output (should be one side of L13) might be worth it (don't be surprised if it wants 20-30 amps). If that revives it you can figure out where to go from there; obviously this requires a fairly good idea of what you are doing and how the card works. And if you don't have an electronics lab handy, how are you going to find the component values anyway?
Wccftech has a history of cobbling together some pretty good guesses and presenting them as fact. The reality all comes down to Moore's law's slow march from law to suggestion to historical footnote.
[TL;DR: If you think nvidia is launching a new board now, ask yourself why in the world they didn't launch it last year when the mining craze was in full swing. 30% more hashes would sell for 30% more bucks, and don't try to pretend nvidia and AMD kept chip prices high to reap the gains.]
Micron is shipping GDDR6 now. The main issue is that if nvidia makes a board now, at best it will be on TSMC's 12nm process. AMD plans to build a 7nm Vega board this summer, but that is a "run into every problem GloFo can hit" part and a professional card only; you won't like the price they ask.
Sometime next year, AMD plans to build Navi on TSMC's 7nm; this should be a consumer board. Nvidia can probably easily shove themselves in front of AMD (but behind Apple), though they won't get much of a time advantage from it on a 7nm card.
So the question you have to ask yourself is this: did nvidia commit a few years back to designing a 12nm chip, and commit to production about a year ago? It might make a ton of sense to produce a 12nm chip now and a 7nm chip later (to scoop up all the gamers waiting for the mining craze to end), but such a chip would have to have been designed and put into production with miners in mind. Nvidia simply waited to go from 28nm to 16nm (and released the 6xx, 7xx, 8xx, 9xx all on 28nm) until finally shipping the 10xx on 16nm (1050 and below on 14nm).
Right now I would say it makes a ton of sense to release the 1180 this summer (on 12nm) and a 2080 next year on 7nm, but that requires an extremely specific crystal ball on nvidia's part. One reason they might have pulled the trigger on 12nm was watching Intel's repeated failures at 10nm and feeling that if mighty Intel had trouble, TSMC couldn't be trusted to be quite on time either.
Note that Volta is already shipping on 12nm, so possibly cutting the thing down and shipping that isn't quite the big deal I am making it out to be. But why in the world would nvidia simply sit on such a design for over a year while the mining craze was in full swing?
an 8700K draws 95W across 144mm² for 0.66W/mm²
a 1080 draws 173W (the VRAM and inefficiency actually get a significant chunk) across 314mm² for 0.55W/mm²
Note that a 7980XE may have disabled cores and is a bit harder to judge.
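The power-density figures above are just watts divided by die area; here's the arithmetic spelled out (numbers from the lines above, not independently measured):

```python
# Power density (W per mm^2) for the chips discussed above.
die = {                      # (watts, die area in mm^2), from the post
    "8700K":    (95, 144),
    "GTX 1080": (173, 314),
}
density = {name: w / mm2 for name, (w, mm2) in die.items()}
for name, d in density.items():
    print(f"{name}: {d:.2f} W/mm^2")
```

The CPU packs more heat into less silicon, which is part of why it can be the harder of the two to cool despite the lower total wattage.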
The higher the temperature and the larger the chip, the more heat flows into the heatsink and out into the case. Temperature is the balance between the watts produced and how well the heatsink removes them. One obvious consideration is that video cards typically need a heatsink that fits on the card itself, while CPUs get a much larger and more cooling-friendly shape. If both are watercooled, this doesn't matter so much.
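That balance can be sketched with the usual first-order thermal model: die temperature is ambient plus watts times total thermal resistance (in C per watt). The resistance values below are made-up illustrations, not measurements of any particular cooler:

```python
# First-order thermal model: T_die = T_ambient + P * R_theta,
# where R_theta (C/W) lumps die->heatsink->air. Values are illustrative.

def die_temp(ambient_c, watts, r_theta_c_per_w):
    return ambient_c + watts * r_theta_c_per_w

# A roomy tower cooler (assumed 0.25 C/W) vs a cramped GPU shroud (0.40 C/W):
cpu_temp = die_temp(25, 95, 0.25)    # comfortably under spec
gpu_temp = die_temp(25, 173, 0.40)   # pushing throttle territory
print(cpu_temp, gpu_temp)
```

The model also shows why watercooling equalizes things: it mostly shrinks R_theta, and does so for both form factors.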
Also, the 8700K is rated up to 100C (although don't be too sure that the thermal diode is in the hottest part of the chip). A further guess might be that it is easy to max out your GPU (pretty much play any modern game with V-sync off, or fire up that hair-based benchmark if you really want to hit thermal limits), while most "tech enthusiast" sites are unlikely to really overwork the CPU (try running largish [but still fitting in L3] double-precision FFTW runs to see what happens; note that AVX-512 will throttle chips that have it).
The GPU temperature is probably whatever it is set to thermally throttle at. You might convince the CPU to heat throttle as well, but it will be harder.
Power-wise they look similar (except nvidia produced three different variants of the "560ti" and three different "1060" cards, so it can still vary a bit).
I'm pretty sure that either will take up a double-width slot; physical fit shouldn't matter (nor will they really care as long as they get about 8 lanes of PCIe, and they will almost certainly have 16 or more available).
Both appear to be designed for a 1920x1080 monitor. The 1060 is simply capable of pushing more modern games at that resolution with presumably all the bells and whistles. You might want more if you have a higher resolution monitor.
I doubt they built low-end cards assuming they would sell in the mining bubble. Either computers in general aren't selling well, or perhaps AMD sold a ton more Raven Ridge (2200G & 2400G) chips than nvidia expected.
No word on Intel chips being returned. If these cards weren't bought for mining, I'd expect Intel to be having trouble as well.