
Unbelievable! The US Never Expected the Replacement to Come So Quickly!

I think the US is making the same mistake the Soviet Union made: choosing the wrong path of AI development. ChatGPT is overrated; it's just an auxiliary tool for office work. 1) ChatGPT has no innovative capability; it is a follower of existing human knowledge. 2) All the data fed to it comes from the internet, and unfortunately the really valuable data is not available on the internet. So ChatGPT can do some general-purpose work, but when it comes to anything professional or deeper, ChatGPT is not useful at all.

China has chosen the path of AI serving industry, which is the right path. For example, Huawei's Pangu is used for scientific research.

It is still a useful tool for automating mundane tasks in businesses, which will reduce costs and increase profits. That's the capitalist goal, really.
 
Although this is not directly related to AI, it indicates that Chinese AI is very advanced, as AI is an increasing part of cybersecurity.



Pentagon Official Says He Resigned Because US Cybersecurity Is No Match for China

 
I have a Huawei phone; I've had it for 4 years and it's an excellent device. Great battery, and it still runs really fast.

Unfortunately, since they don't have access to the Google Play Store, I won't buy another one.
 
Dude, you have put a lot of words in my mouth. Since when did I say you debug with machine language?

I said you need to know your limits in the prototype stage, so you know what the machine's response will be. You put different variables into the equation to see what the result is. That's for general programming.
Hahaha, sure, sure. You still cannot answer me on the relevance of machine code for AI. You told me you need it to check for errors; I am telling you NOBODY checks machine code for coding errors.
Prototype limit? What limit are we talking about here? Computing power? Memory?

Hahaha. What does putting variables into an equation have to do with using machine language for AI? FFS, you are going into incoherent mode again.

On the other hand, for AI or machine learning, you need to know how the model's process works. I mean, you claim to know AI, right? Explain to me how you understand large language models without knowing how to work with matrix transformations, which is part of machine-language knowledge.
For someone who has never programmed an AI algorithm, you are lecturing me? I am not a language-model expert; I did image AI and GA optimization. So explain to me how matrix transformation is used in AI; I am really curious. Lol. I never needed it to code my algorithms; there are multiple strategies to achieve optimization.
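For what it's worth, the "matrix transformation" being argued about can be made concrete: the forward pass of a single dense neural-network layer is exactly a matrix transformation of the input. A minimal NumPy sketch, with sizes and values invented purely for illustration:

```python
import numpy as np

# A dense neural-network layer is a matrix transformation:
# output = activation(W @ x + b). The sizes here are arbitrary,
# chosen only to show the shape of the operation.

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input vector: 3 features
W = rng.normal(size=(2, 3))   # weight matrix: maps 3 features to 2 units
b = np.zeros(2)               # bias vector

hidden = np.maximum(0.0, W @ x + b)   # matrix multiply, then ReLU

print(hidden.shape)   # (2,)
```

Whether a practitioner ever touches this at the machine-code level is a separate question; frameworks like PyTorch or TensorFlow wrap the same linear-algebra operation.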


The reason I know you know shit about this is that you think machine language is only about 00100101010. It isn't; it's about add, subtract, move, transpose, and transforming bits, i.e. how you get from one bit state (0/1) into another. Thinking of machine language as just 0011 is the most rudimentary form of knowledge; they stop teaching that in year 2.
Machine language is ONLY 101001; both opcode and operand are ONLY in 1s and 0s. Are you confusing it with assembly language? Have I just caught an idiot?!




First of all, you didn't code AI either. I don't think you have ever coded anything; as I said, now I even think you never went to college.
Okay, genius. So explain to me: how do you train a GA model?

Second of all, I can school you on machine learning, which you need to know before you code. How do you code if you don't know the limits of where that model leads? Are you saying you are going to code an AI language model without barriers, without limits? You don't follow any instruction architecture? If you are saying so, then I know for sure you are lying about knowing how to code AI.
Machine learning is a broad concept. Do you mean NNs? Layered perceptrons? Which one, dumbass? GA is an evolutionary approach, NN is a machine-learning approach; both are AI. Different strategies, understand, dumbfvk?
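Since "how do you train a GA model?" keeps coming up, here is a minimal sketch of a genetic algorithm: no gradients, no matrices, just selection, crossover, and mutation. The toy fitness function and all parameter values are invented for illustration, not taken from anyone's actual work:

```python
import random

# Toy genetic algorithm: evolve a population of real numbers toward
# the maximum of f(x) = -(x - 3)^2, whose optimum is x = 3.
random.seed(42)

def fitness(x):
    return -(x - 3.0) ** 2

pop = [random.uniform(-10, 10) for _ in range(20)]  # random initial population
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # selection: keep the fittest half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0              # crossover: blend two parents
        child += random.gauss(0, 0.1)      # mutation: small random nudge
        children.append(child)
    pop = parents + children               # elitism: parents survive

best = max(pop, key=fitness)
print(best)   # converges near 3.0, the optimum
```

The point of contention in the thread, that GA and NN are different strategies, shows up here: nothing in this loop involves backpropagation or weight matrices.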


Yet three posts on, you still have not explained anything to me about how machine language is supposedly not related to machine learning or AI. In fact, I even showed you an extract from Cornell discussing bit matrices in AI and machine learning, and you still dig the hole deeper by saying it is not relevant? Saying that is like saying the ISA is not relevant to AI, or that the MIPS architecture is not relevant. lol...

Explain this please, if you can


I doubt you can tho
That's machine language? Hahaha. It shows how dumb you are. Unless you are fcking Keanu Reeves... Lol. My man here is a fcking genius who can visualize code in 1s and 0s, MATRIX style.

What data array? First of all, that memory address is for data redundancy, and no, you don't need to know all 1,000 or so entries of your data array, because that's what the index bit is for. And that's what the check bit is doing. So I think it was you who sounded like a dumbass on the issue, with no idea what you are talking about.
I don't need to know a single address in my program, understand? It is assigned by my compiler, numb numb. What do you mean by data redundancy? I can have data redundancy without even knowing any address, dumbfvk. Dude, are you for real? And why do I need data redundancy for training input? Explain it to me. You mean to tell me the millions of images used to train a model need to be redundant? Hahaha. Omg, you are a grand idiot. Redundancy? Write code to save the input files twice. Voilà... Hahaha
 
lol, I remember in Programming 101 (that's not the name; I forgot the actual name), the lecturer said in the first lecture that computers look smart because they only do what the user tells them, and they can't deviate from the given task, which is what makes them smart and stupid at the same time.
Yes, AI is more than the CCP wanting to monitor every Chinese citizen while they have breakfast or dinner.
AI is somewhat like predicting the future. Can the CCP predict what Chinese people will want in 10 days, or how the Ukraine war will develop?
Google DeepMind has developed an AI that makes faster, better, and more accurate 10-day weather forecasts than any other weather agency.
In math there is a term called probability: what is the likelihood that an event occurs, and how often? Is it 1 percent or 99 percent?



 
Viet, I hope you understand you are being educated by an idiot. I am serious!
 
Typical @Han Patriot fashion.

Ignore all the questions being asked and repeat the same thing over and over again; repeat it long enough and people won't see the other failures.

I have neither the time nor the mood to continue with this; it's like arguing with a broken record. And you wonder why nobody here takes people like you seriously :rofl: :rofl:

I have shown you my old student ID card. Unless you can show me some tangible evidence that you even attended college, I am not interested in arguing with Google.
 
ah, the good old weather pattern......

The issue here is that you cannot predict an event like this until you have enough data to basically cover all the branches. This is about probability density: a likely event leads to more data being added to the picture, and the more data you add to the probability density, the more accurate the range is going to be.

Say, for example, rain leads to both thunder and sunshine. In this case, both outcomes are 50% (let's pretend there are 2 data points). Now, if we add another rule saying thunder leads to rain and thunder leads to sunshine, you will have a two-dimensional matrix, with more accurate results being predicted, and so on and so forth.

AI, in a way, can predict this by adding more data points, which lead to better probabilities for the next outcome, which in turn lead to more data points.

The CCP wouldn't want their AI to do that, because if the entire notion of AI becomes automated, where does that go? Adaptive AI only ever goes in one direction, and the one thing no computer program can do is a self-deleting instruction, because that would lead to an instant crash of the system. Which means if the AI leads to an outcome the CCP doesn't want, there is no way for the CCP to control it, unless they can monitor the zillions of instructions the AI executed to reach that conclusion. It would take people 100 years to do that; a computer with petaflop processing power can do it in hours.
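The rain/thunder/sunshine example above can be written as a small transition table: count the observed transitions, then turn the counts into probabilities. The observations below are invented to match the "2 data points per state" setup in the text:

```python
from collections import Counter, defaultdict

# Count observed weather transitions, then estimate probabilities.
# More observations refine the estimates, as described above.
observations = [
    ("rain", "thunder"),
    ("rain", "sunshine"),
    ("thunder", "rain"),
    ("thunder", "sunshine"),
]

counts = defaultdict(Counter)
for today, tomorrow in observations:
    counts[today][tomorrow] += 1

def p(tomorrow, given):
    """Estimated probability of tomorrow's weather given today's."""
    total = sum(counts[given].values())
    return counts[given][tomorrow] / total

print(p("thunder", given="rain"))   # 0.5 — two equally likely outcomes
```

With only two data points per state, every outcome sits at 50%; adding more observations is what sharpens the distribution.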
 
Yes: collecting data over long periods, detecting certain patterns, then predicting the future.
The question, however, is this: all weather forecast agencies have the same datasets, so why does Google DeepMind make more precise predictions, even beating the top European weather agency?



 
I don't do that much coding; that's a software engineer's job. I did hardware/software implementation, and my programming skills are basically just enough for my field of study, which, as in my reply to @Han Patriot, is more topological thinking than simple coding. That is how I describe how ChatGPT functions and why I recognized that ChatGPT is revolutionary: we take our brains for granted, and we don't think about how we think or about the procedure of problem solving. This is what I studied at UNSW.

I do know about AI image-generation software like Midjourney and Stable Diffusion, and I do understand the diffusion model (well, at least I used to). The layman's analogy for the diffusion model is Photoshop. If you have ever used Photoshop, you know there is a tool to smooth an image. Think of Midjourney or Stable Diffusion as something like this, but in reverse. You have a polluted image with a lot of random bits in it, but you still have some bits, or more precisely pixels, that you know come from the original image; if not, the AI simply generates another image and carries on until it finds what it was looking for. Say, for example, I want the AI to draw a human face. A human face has 2 eyes (well, mostly), 1 nose, and 1 mouth. So if the program knows what an "eye" or a "mouth" is, you can program it to work those pixels out, because human parts, although different, are still similar between each of us; I mean, a nose cannot have 2 nostrils on top of the bridge, right? These form a set of rules for the computer to isolate those parameters and filter out what a human face is supposed to look like.

Now you have a rough location for a mouth, a nose, and 2 eyes; then you can use the associations between the pixels to try to isolate facial features. Then you start to add hair, skin color, imperfections and so on to make a face, thus drawing a human, and it keeps diffusing into more minute detail by adding and/or removing random pixels.

The key here is that the AI program knows what to look at. It sets a rule (the highlighted parts above are the "rules" for a human face), then it can generate enough random pixels following this rule and proceed to construct a human face. You can then apply this to other things as well, such as a car: define what a car is, set a rule over it, and the AI can generate a picture of a car using those rules.

This is, in layman's terms, how AI generative programs work using a diffusion model.
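The "Photoshop in reverse" description above can be caricatured in a few lines: start from pure noise and repeatedly subtract a fraction of the estimated noise. In a real diffusion model a trained neural network estimates the noise at each step; here the target image itself stands in for that network, purely to show the iterative refinement loop:

```python
import numpy as np

# Toy denoising loop: refine random noise toward a known target.
# Real diffusion models replace `predicted_noise` with the output
# of a trained network; this stand-in only illustrates the loop.

rng = np.random.default_rng(1)
target = np.array([0.0, 1.0, 1.0, 0.0])   # pretend 4-pixel "image"
x = rng.normal(size=4)                     # start from pure random noise

for step in range(50):
    predicted_noise = x - target           # stand-in for the learned predictor
    x = x - 0.1 * predicted_noise          # remove a fraction of the noise

print(np.abs(x - target).max() < 0.05)     # True: refined close to the target
```

Each pass shrinks the remaining error by a constant factor, which is the "keeps diffusing into more minute detail" behavior described above.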

I tried to install Stable Diffusion on my laptop out of curiosity, but my laptop is not strong enough. Hahaha

Seeing how long it takes to generate one image, I gave up.


Yes, I understand Photoshop.

So basically it's layering: layers of images combined together with some rules to make the result even more real and believable.
 

I actually answered your questions and was waiting for you to tell me how to program my algorithm. I am waiting for someone who has never dealt with AI to come 'teach' me something I did 15 years ago.

You still have not answered me on how machine code is relevant to AI, which was our initial argument anyway, before you went off ranting about your dick size. Lolol

Why should I post you a picture of my student ID? I don't think I even kept it. I have nothing to prove here; normally only idiots need to prove themselves. Lolol. In my uni, the dumb kids who didn't qualify for Engineering or IT went for business courses.
 
You need more than a laptop to run Stable Diffusion.

Yes, it's basically layering, but instead of you going from one state to another, the AI in this case will "learn" from each data point by adding something to the original image, then deleting it, and repeating the process until the result is refined into its highest-probability form. This way you build up the probability density, and you can predict what the next bit is going to be using a Gaussian model.

 

Well, it's about how refined the probability density is, which is directly related to processing power and also to the data points. As explained above, the micro-chains are built by processing thousands of data points using a Gaussian model; with each pass you increase the probability while decreasing the scope. Say you have 2-point data on the X and Y axes; you apply Gaussian noise to the probability density (the Z axis), and you get a data point pointing to the future event. Again using the previous example, and note the examples are not mutually exclusive (I am not a meteorologist and know nothing about predicting weather, so I can't construct mutually exclusive data):

Thunder -> Rain
Thunder -> Sun
Rain -> Sun
Rain -> Thunder

We have a three-dimensional matrix to predict tomorrow's weather. If it was Thunder today and Rain yesterday, then the probability of Sun would be greatest (~75%), because we already had both Rain and Thunder. That means we can discount the outcomes that are not Sun, since Rain and Thunder have already happened; the way it can go is either continued Thunder or Sun.

Now, you can imagine that with more processing power I can put more weather data in, and the probability density of the n-dimensional matrix will be able to refine numerous flows. So running this AI-generated forecast on a home computer would not be the same as running it on a supercomputer.

It's all about the variables and how you refine the outcome.
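The "Thunder today, Rain yesterday, therefore Sun (~75%)" step above can be sketched by conditioning on the last two days instead of one. The counts below are made up so the numbers land on the text's rough figure; nothing here comes from real weather data:

```python
from collections import Counter

# Second-order conditioning: tomorrow's weather depends on the last
# TWO days. The counts are hypothetical, chosen only for illustration.
history_counts = {
    ("rain", "thunder"): Counter({"sun": 3, "thunder": 1}),
}

def p(tomorrow, history):
    """Estimated probability of tomorrow's weather given a 2-day history."""
    c = history_counts[history]
    return c[tomorrow] / sum(c.values())

print(p("sun", ("rain", "thunder")))   # 0.75
```

Adding more history (a longer tuple key) is exactly the "more processing power, more data" refinement the post describes: the table grows combinatorially, which is why compute matters.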
 
lol, it must be sad for you to think you are all that while no one takes you seriously; everyone else thinks you are a joke or a troll.

As I said before, I am more than glad to claim to be stupid or an idiot, because even then I would still command more respect than the likes of you. So if I am stupid and an idiot, then what are you? I mean, look at this thread alone: both meaningful questions that people raised were directed at this idiot, not at you, and one of the askers is in your camp. That right there says something.

As I said, I don't really believe a word you say. You are no better than those stolen-valor dudes who claim they fought in wars X, Y, and Z in totally irrelevant venues; they claim whatever, wherever, but are not stupid enough to show up at any VFW march. People like you can claim to be a structural engineer, then an instrument engineer, and now a computer programmer here on an inconsequential forum, but I won't see you on a university campus claiming this, because you know you would get shut down immediately by the academics.

And as I said, I don't care what you claim; I showed my proof. Saying you didn't keep anything from your college days is absurd; everybody keeps something, and if not, you at least keep the letter of completion or the diploma itself. I even showed my UNSW Certificate of Completion here with MH Yang when we exchanged our certificates. So until you do the same, I will just say it's bullshit. And stop wasting my time on bullshit.
 

The one-eyed cyclops is king of the blind. Try telling engineers or programmers that they need to troubleshoot their algorithms at the machine-code level and they would drop to the floor laughing. You can't even differentiate machine language from assembly language. You have no basic understanding of how AI strategies are implemented, and you keep talking to me about matrices. When I asked you how to set the neurons, what weightings to use, the criteria for the neurons, you couldn't answer anything and just talked at me about machine learning. What type of machine learning are we talking about? Neural networks? Perceptrons? You can't tell the difference? Do you understand evolutionary algorithms like GA? Particle swarm? No, you don't understand anything. And you have the audacity to lecture me on data redundancy? Do you think this is some sort of distributed control system? Even there, redundancy is achieved through hardware means like dual-network and dual-CPU approaches, and those concepts have nothing to do with AI algorithms; that's industrial control. You want me to start on that? Lol. Respect, my arse, for someone who has been bashed repeatedly by me and proven to be an idiot.
 
