[Sleepless All Night ● Can't Believe It] Nvidia, Super Micro Computer all crashed hard.... what's happening to the AI theme?

focus1974

Greater Supremacy Member
The DeepSeek thing is likely overblown.

There are a lot of LLMs on Ollama with FP8 quantisation and comparable parameter sizes. I really think this is a China operation.
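(For anyone who wants to check this themselves, here is a minimal sketch of querying a locally pulled quantised model through Ollama's REST API. It assumes Ollama is running on its default port 11434 and that a quantised model tag such as deepseek-r1:8b has already been pulled with `ollama pull`; the tag is purely illustrative.)

```python
# Minimal sketch: query a locally hosted quantised model via Ollama's REST API.
# Assumes Ollama is running locally (default port 11434) and the model tag
# below has already been pulled, e.g. `ollama pull deepseek-r1:8b`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:8b",   # illustrative tag; any pulled model works
        "prompt": "Explain FP8 quantisation in two sentences.",
        "stream": False,             # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```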

yes, but the key thing is it is threatening the profit margins of the AI providers (AMZN, MSFT, GOOG, OpenAI, etc). Imagine: last time they could charge $20 monthly, or even $200 monthly for some tiers.
Now you have to charge $2 monthly to compete with DeepSeek.

Profit margins evaporate by 90% at the minimum .. which means the profit projections for the AI parts of these companies are all GG.

Now analysts should be busy readjusting forward earnings, and funds are just dumping first and asking questions later.


The scary thing is .. DeepSeek has gone viral today. The DeepSeek API is GG now.
https://status.deepseek.com/

Have been using it to try out coding via the API, and it performs as well as OpenAI or Sonnet,
but at a fraction of the cost.

I have used it for so long ... and have only used up USD $1.20 for 10 million tokens (input/output)!!
This would not be possible with OpenAI or Sonnet.
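(For context, a sketch of the kind of usage being described: DeepSeek exposes an OpenAI-compatible chat API, so the standard openai Python client can be pointed at it and the response's usage field used to tally spend. The base URL, model name, and per-token prices below are illustrative assumptions; check DeepSeek's own docs and pricing page for real numbers.)

```python
# Minimal sketch: call DeepSeek's OpenAI-compatible API and tally token spend.
# base_url, model name, and prices are illustrative assumptions; verify them
# against DeepSeek's docs/pricing before relying on the figures.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",   # assumed OpenAI-compatible endpoint
    api_key="YOUR_DEEPSEEK_API_KEY",
)

resp = client.chat.completions.create(
    model="deepseek-chat",                 # illustrative model name
    messages=[{"role": "user",
               "content": "Write a Python function to merge two sorted lists."}],
)
print(resp.choices[0].message.content)

# Rough cost tally, using assumed prices in USD per 1M tokens.
PRICE_IN, PRICE_OUT = 0.14, 0.28           # placeholder rates, not official
usage = resp.usage
cost = (usage.prompt_tokens / 1e6) * PRICE_IN + (usage.completion_tokens / 1e6) * PRICE_OUT
print(f"{usage.prompt_tokens} in / {usage.completion_tokens} out ~= ${cost:.6f}")
```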


Jirachi

Great Supremacy Member
yes, but the key thing is it is threatening the profit margins of the AI providers (AMZN, MSFT, GOOG, OpenAI, etc). Imagine: last time they could charge $20 monthly, or even $200 monthly for some tiers.
Now you have to charge $2 monthly to compete with DeepSeek.

Profit margins evaporate by 90% at the minimum .. which means the profit projections for the AI parts of these companies are all GG.

Now analysts should be busy readjusting forward earnings, and funds are just dumping first and asking questions later.
Yes. I understand where you are coming from.
 

Clarence NMT

Supremacy Member
So DeepSeek is real? The knock-off version really better than the original?
This kind of thing will always have a new best; the thing is, this one is free and open source. It's like you charge some amount and other people charge nothing .. it spoils the market.
 

Nevereatrice

Honorary Member
Later when the market opens there will be another round of selling off.

Then after lunchtime the real sell-off will come.

I hope you all had already sold all your stocks when I warned about this crash a month ago.
I already offloaded NVDA after I saw the monthly chart rolling over last year. I thought this year would be the year of small and mid caps, but now the whole market will be dragged down by the heavyweights.
 

ThisIsSparta

Supremacy Member
Bought when they announced earnings together with the share split that time.
Sold when EDMW got that thread about NVDA going to the moon.
Personally for me, when stupid people start to buy, it is an indicator to sell.

Anyway, time for the know-it-all EDMW bears to spam "i told u so" and celebrate.
 

AuraKUPO

High Supremacy Member
I already offloaded NVDA after I saw the monthly chart rolling over last year. I thought this year would be the year of small and mid caps, but now the whole market will be dragged down by the heavyweights.
That's good bro. No loss is a win already.
 

selemuse

Senior Member
Talking about scale: it certainly can scale with more hardware and develop more sophisticated models in a shorter time.

It's still important, but it's overvalued, since DeepSeek has proven that you can do more with less.

You are the one who has no idea what is going on.

The US already does not want Nvidia's better chips going to China, and with China open-sourcing their AI, many people now know that buying Nvidia's lower-end chips can do as well as OpenAI and the other US models.

So why would people buy Nvidia's better chips?

Unless the US has a major breakthrough, short-term demand for Nvidia chips will surely go down.
Of course I know what I'm talking about. I work in this industry, for your info.
 

dontwastetime

Banned
All I know is this:

Stock dropped 12% = 700k loss

1% = 700k / 12 ≈ 58k

100% ≈ 5.8m

What did you do, man? Leverage? But I don't think so .. please explain.
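(The back-of-envelope math above checks out. Here is a quick sanity check in Python, taking the quoted 700k loss and 12% drop at face value.)

```python
# Sanity check on the implied position size: if a 12% drop cost 700k,
# the full position (100%) works out to roughly 5.83m.
loss = 700_000          # quoted loss from the post
drop = 0.12             # 12% single-day drop

position = loss / drop            # implied total exposure before the drop
per_percent = position / 100      # loss per 1% move

print(f"per 1%: {per_percent:,.0f}")          # ~58,333
print(f"implied position: {position:,.0f}")   # ~5,833,333
```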
 

joshwong11

Senior Member
yes, but the key thing is it is threatening the profit margins of the AI providers (AMZN, MSFT, GOOG, OpenAI, etc). Imagine: last time they could charge $20 monthly, or even $200 monthly for some tiers.
Now you have to charge $2 monthly to compete with DeepSeek.

Profit margins evaporate by 90% at the minimum .. which means the profit projections for the AI parts of these companies are all GG.

Now analysts should be busy readjusting forward earnings, and funds are just dumping first and asking questions later.

lmao, what are you talking about. DeepSeek's approach is the same as OpenAI's from o1 onwards, just that it costs less since they found a way to optimise Nvidia and AMD hardware. There is nothing preventing OpenAI and the other companies from doing the same, and they can do it at such scale and abundance that it makes them even more profitable. OpenAI still has o3/o4 and GPT-5 in the pipeline. Anyway, OpenAI from o1 onwards has also been doing the same thing, minimising costs and making their models more efficient so they could sell their API at a profit, since there was no challenger at that time.
Anyway, all these companies you listed will be the main beneficiaries instead, since they will require less compute/cost to offer AI solutions to existing real customers. All of them can scale up faster.
 

focus1974

Greater Supremacy Member
Because it uses synthetic (derived) data from ChatGPT instead of mining raw data itself.

yup. Now most LLMs should be moving to the next phase of synthetic-data and test-time-scaled approaches.

The three main phases of LLM training are still generally recognised as:
  1. Pre-training (self-supervised learning)
  2. Instruction fine-tuning (supervised learning)
  3. Reinforcement Learning from Human Feedback (RLHF)
These phases continue to be the foundation of LLM development. However, research is ongoing to improve and expand upon these methods.

Regarding synthetic data and test-time-scaled approaches:
  1. Synthetic Data: While LLMs can generate synthetic data, this is typically used for fine-tuning other models rather than for training the LLM itself. The primary training data for LLMs still consists of real-world text from various sources.
  2. Test-time Scaling: Recent research has explored scaling test-time computation to improve LLM performance. This involves optimising how the model uses computational resources during inference, rather than changing the training process itself.
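(A concrete illustration of the test-time-scaling idea in point 2: sample the same question several times at non-zero temperature and take a majority vote over the answers, i.e. self-consistency. The sketch below reuses the assumed OpenAI-compatible DeepSeek endpoint from earlier; the base URL, model name, and key are illustrative, not confirmed.)

```python
# Minimal sketch of test-time scaling via self-consistency: spend extra
# inference compute by sampling N answers, then majority-vote over them.
# Endpoint and model name below are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

QUESTION = "What is 17 * 24? Reply with the number only."

def sample_answer() -> str:
    """Draw one independent answer at non-zero temperature for diversity."""
    resp = client.chat.completions.create(
        model="deepseek-chat",          # illustrative model name
        messages=[{"role": "user", "content": QUESTION}],
        temperature=0.8,                # diversity between samples
    )
    return resp.choices[0].message.content.strip()

votes = Counter(sample_answer() for _ in range(5))  # N = 5 samples
answer, count = votes.most_common(1)[0]
print(f"majority answer: {answer} ({count}/5 votes)")
```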
 