I has a new gpu!

Well kinda, I have a choice of three:

GeForce 8800 GT
GeForce 8800 GTS
GeForce 9600 GT


Yeah I know they aren’t the best, but come on, I found these in a wardrobe (seriously).

I need help choosing which one (I’m currently using a Radeon 5770), so could you guys help?

I would go for the 8800 GT. It has better graphics performance and a better texture fill rate.

EDIT: Keep your Radeon 5770, it’s better than all of the three. But the 8800 GT is better than the 8800 GTS and the 9600 GT. Keep the Radeon 5770.

I don’t know… Gran Turismo 2 was pretty good, but 3 was kind of a letdown… Not sure about 8800 or 9600 (did they really make that many sequels?)

get the Intel HD Graphics 4600, I’m telling you man, CS:GO at 20 FPS

PS Mustard Race Dud :laughing:

:laughing:

no

it’s just truth

:cry:

see if the consoles can ketchup

Consoles are cheaper, but you have to pay for online play on consoles (which is a cocky move by Microsoft and Sony). And on PC you have more customization, better anti-aliasing, and better texture filtering.

(Somewhat off-topic)
While I’m here daydreaming about having four SLIed 980 Ti STRIX cards in my hands, running games on max graphics + ENB + ReShade.
Sadly, that’ll never happen. :cry: :crying_cat_face: :cry:

I know the feels bro. :cry:

PlayStation online is free, but Xbox online isn’t

With the PS3, yes it is.
With the PS4, no it isn’t.

Steam is free c:

kind of

u still need money to do anything
but yeah, k

goddarn it Sony
I had my hopes up

soz i hasnt replied, but i found out like literally just now that my [quote=“Cazran, post:1, topic:19668”]
Radeon 5770
[/quote]
is actually a

dont matter, im using GTS rn, better performance according to www.gpuboss.com.

They are not the same, they have different specs and the Radeon 5770 wins. But they can have the same FPS in games. Don’t trust gpuboss, Nvidia pays them well.

OK, let’s look

8800 GT
The 8800 GT, codenamed G92, was released on 29 October 2007. The card is the first to transition to the 65 nm process, and supports PCI Express 2.0.[12] It has a single-slot cooler, as opposed to the double-slot cooler on the 8800 GTS and GTX, and uses less power than the GTS and GTX due to its 65 nm process. While its core processing power is comparable to that of the GTX, the 256-bit memory interface and the 512 MB of GDDR3 memory often hinder its performance at very high resolutions and graphics settings.
The 8800 GT, unlike other 8800 cards, is equipped with the PureVideo HD VP2 engine for GPU-assisted decoding of the H.264 and VC-1 codecs. Performance benchmarks at stock speeds place it above the 8800 GTS (640 MB and 320 MB versions) and slightly below the 8800 GTX.
A 256 MB version of the 8800 GT with lower stock memory speeds (1.4 GHz as opposed to 1.8 GHz) but the same core is also available. Performance benchmarks have shown that the 256 MB version of the 8800 GT has a considerable performance disadvantage compared to its 512 MB counterpart, especially in newer games such as Crysis. Some manufacturers also make models with 1 GB of memory; with large resolutions and big textures one can perceive a performance difference in the benchmarks. These models are more likely to take up two slots in the computer.
The release of this card presents an odd dynamic to the graphics processing industry. At an NVIDIA-projected initial street price of around $200, this card outperforms the ATI flagship HD 2900 XT in most situations, and even NVIDIA’s own 8800 GTS 640 MB (previously priced at an MSRP of $400). The card, only marginally slower in synthetic and gaming benchmarks than the 8800 GTX, also takes much of the value away from NVIDIA’s own high-end card. This release was shortly followed by the (EVGA) 8800 GTS SSC (the original 8800 GTS re-released with 96+ (112) shader processor units), and ATI’s counter, the HD 3800 series.
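
The memory-speed gap between the 512 MB and 256 MB versions is easy to put into numbers. A minimal sketch (bandwidth = effective data rate × bus width in bytes; both figures taken from the paragraph above):

```python
def memory_bandwidth_gb_s(effective_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: effective data rate times bus width in bytes."""
    return effective_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# 512 MB 8800 GT: 1.8 GHz effective GDDR3 on a 256-bit bus
print(memory_bandwidth_gb_s(1800, 256))  # 57.6 GB/s
# 256 MB version: 1.4 GHz effective on the same bus
print(memory_bandwidth_gb_s(1400, 256))  # 44.8 GB/s
```

So the 256 MB card gives up roughly 22% of its memory bandwidth on top of the smaller frame buffer, which fits the benchmark gap described above.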

Next

8800 GTS
The first releases of the 8800 GTS line, in November 2006, came in 640 MB and 320 MB configurations of GDDR3 RAM and utilized NVIDIA’s G80 GPU.[17] While the 8800 GTX has 128 stream processors and a 384-bit memory bus, these versions of the 8800 GTS feature 96 stream processors and a 320-bit bus. With respect to features, however, they are identical because they use the same GPU.[18]
Around the same release date as the 8800 GT, NVIDIA released a new 320 MB version of the 8800 GTS. While still based on the 90 nm G80 core, this version has 7 of the 8 clusters of 16 stream processors enabled (as opposed to 6 of 8 on the older GTSs), giving it a total of 112 stream processors instead of 96. Most other aspects of the card remain unchanged. However, because the only two add-in partners producing this card (BFG and EVGA) decided to overclock it, this version of the 8800 GTS actually ran slightly faster than a stock GTX in most scenarios, especially at higher resolutions, due to the increased clock speeds.[19]
NVIDIA released a new 8800 GTS 512 MB based on the 65 nm G92 GPU on 10 December 2007.[20] This 8800 GTS has 128 stream processors, compared to the 96 processors of the original GTS models. It is equipped with 512 MB of GDDR3 on a 256-bit bus. Combined with a 650 MHz core clock and architectural enhancements, this gives the card raw GPU performance exceeding that of the 8800 GTX, but it is constrained by the narrower 256-bit memory bus. Its performance can match the 8800 GTX in some situations, and it outperforms the older GTS cards in all situations.

Next

9600 GT
65 nm G94 GPU.
64 stream processors.
16 raster operation (ROP) units, 32 texture address (TA)/texture filter (TF) units.
20.8 billion texels/s fill rate.
650 MHz core clock, with a 1625 MHz unified shader clock.
1008 MHz memory (2016 MHz data rate), 256-bit interface for 64.5 GB/s of bandwidth (57.6 GB/s for the 1800 MHz configuration).
512 MB – 2048 MB of GDDR3 or DDR2 memory.
505 million transistors.
DirectX 10.0, Shader Model 4.0, OpenGL 3.3, and PCI-Express 2.0.[8]
Supports second-generation PureVideo HD technology with partial VC1 decoding.
Is HDCP compatible, but its implementation depends on the manufacturer.
Supports CUDA and the Quantum Effects physics processing engine.
Almost double the performance of the previous Nvidia mid-range card, the GeForce 8600 GTS.
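
Two of those bullet points can actually be cross-checked from the others: the texel fill rate is just texture units × core clock, and the bandwidth is data rate × bus width. A quick sketch using only the numbers from the list:

```python
# 9600 GT: 32 texture filter units at a 650 MHz core clock
fill_rate_gtexels = 32 * 650e6 / 1e9
print(fill_rate_gtexels)  # 20.8 billion texels/s, matching the list

# 2016 MHz effective data rate on a 256-bit (32-byte) interface
bandwidth_gb_s = 2016e6 * (256 / 8) / 1e9
print(bandwidth_gb_s)  # ~64.5 GB/s, matching the list
```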

Now last but not least
Radeon 5770

The codename for the 5700 GPU was Juniper, and it was exactly half of Cypress: half the shader engines, half the memory controllers, half the ROPs, half the TMUs, half of everything. The 5750 had one of its ten shader engines disabled, so it had 720 stream processors, while the 5770 had all ten enabled. Additionally, the 5750 ran at 700 MHz and a lower voltage, while the 5770 used more power but ran at 850 MHz. Both cards were normally found with 1 GB of GDDR5 memory, but 512 MB variants did exist, with performance suffering somewhat.
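
You can back out the 5770’s shader count from that paragraph alone (a quick sketch; the 80-per-engine figure is inferred from the 720 number above, not stated there):

```python
# 5750: 720 stream processors across 9 of Juniper's 10 enabled engines
sp_per_engine = 720 // 9
print(sp_per_engine)       # 80 stream processors per engine

# 5770: all ten engines enabled
print(10 * sp_per_engine)  # 800 stream processors
```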

That was a wall

I doubt that you typed that by yourself @Andree_Gunderson :wink:

tl;dr

thanks bruhs for all da help, and ill take all da things into consideration.

I’m glad