DataPredict [Release 1.21] - General Purpose Machine Learning And Deep Learning Library (Learning AIs, Generative AIs, and more!)

I’m not talking about the simplicity of what your code can do, but rather I am referring to the code itself.

Sir, that is not a Large Language Model. That is a bunch of “if” statements. I can provide you the mathematics behind LLMs if you want. LLMs don’t use if statements 90% of the time.

It is designed to construct a table of observations. I never said it’s an LLM; it’s an LLM utility, meant to give an LLM awareness of its surroundings in text format.

Table of observations? That is unheard of among LLM research scientists. Could you please elaborate more on that?

Sure, it is something like this.
“Some of the limitations of large language models have been put down to them not being able to understand the physical world. Maybe it’s possible they can become more intelligent if they can embody and experience a digital one.”
OpenAI snaps up role-playing game dev as first acquisition (msn.com)

Please explain what that is. I searched for “table of observation for large language model” on Google and it returned nothing related to it.

I want to know the mathematics behind it, why it was created that way and so on.

Sure. So if you ask a large language model to take on the role of a character, it will do better if it knows who and where the character is and what is surrounding it. For example, with this you can provide the context as the previous response, or you can use something like GPT to expand on the context of the environment.

-- HttpService is needed for the HTTP request and JSON encoding/decoding
local HttpService = game:GetService("HttpService")

function conversationaldialogue(str, context, responses, model)

	local API_URL

	-- Pick one of the two models at random if none was specified
	if model == nil then
		model = math.random(1, 2)
	end

	-- Define the URL for the request
	if model == 1 then
		API_URL = "https://api-inference.huggingface.co/models/microsoft/GODEL-v1_1-base-seq2seq"
	else
		API_URL = "https://api-inference.huggingface.co/models/facebook/blenderbot-400M-distill"
	end

	-- Bearerkey should be defined elsewhere (a "Bearer <token>" string for the Hugging Face API)
	local headers = {
		["Authorization"] = Bearerkey,
		--["Content-Type"] = "application/json"
	}

	-- Append the latest user input to the conversation context
	table.insert(context, str)

	-- Define the payload for the request
	local payload = {
		inputs = {
			past_user_inputs = context,
			generated_responses = responses
		}
	}

	-- Encode the payload as a JSON string
	local payloadJSON = HttpService:JSONEncode(payload)

	-- Send the request and get the response
	local success, response = pcall(function()
		return HttpService:RequestAsync({
			Url = API_URL,
			Method = "POST",
			Headers = headers,
			Body = payloadJSON
		})
	end)

	-- Check if the request was successful
	if success then
		-- Decode the response as a JSON table
		local responseJSON = HttpService:JSONDecode(response.Body)

		-- Check if the response has a generated_text
		if responseJSON.generated_text then
			return responseJSON.generated_text
		else
			-- Print the raw response for debugging
			print(response)
			return nil
		end
	else
		-- Print an error message
		print("Request failed: " .. tostring(response))
		return nil
	end

end

Also, this module is a part of my multi-model AI system that uses old-school database searching.
So if the algorithm sees an enemy that is in the Bestiary, it follows the observation with a description of the enemy pulled from the database.
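
Roughly, that lookup works like this (the Bestiary table and the descriptions below are made-up placeholders, not the actual database):

-- Rough sketch only: a plain Lua table standing in for the Bestiary database,
-- keyed by the enemy's name
local Bestiary = {
	["Goblin"] = "A small, green-skinned creature that attacks in groups.",
	["Wraith"] = "A hostile spirit that drains the life of anything nearby."
}

-- When the observation algorithm sees an enemy, append its database
-- description to the observation string
local function describeEnemy(observation, enemyName)
	local description = Bestiary[enemyName]
	if description then
		return observation .. " " .. description
	end
	return observation
end

print(describeEnemy("You see a Goblin blocking the path.", "Goblin"))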


First, what is this?

All you’re basically doing is sending text to a Hugging Face large language model. That’s not the mathematics.

Also, where’s the research paper on this “table of observation” on Google Scholar? I’m pretty sure it can explain more than you do. I’m very interested in this term you made up.


You seem to not be grasping what this is. You must not have much experience working with LLMs, or you are just being purposefully dense.
For example, this is what you input into the LLM:
local payload = {
	inputs = {
		past_user_inputs = {"What should you do?"},
		generated_responses = {"I am Magus Art Studios,"}
	}
}


You may think this data was constructed by an LLM, but it was not; this is the input data to the LLM that gives it character context.
The LLM takes on this personality and knows that this is its surroundings. In this example the environment is very empty, but the algorithm would construct a full observation if there were objects located in those directions. Then my chatmodule algorithm connects the observations with a string from the personality database, and the output can be fed into a large language model, which will happily roleplay as that character.
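
To give a rough picture (the personality and observation strings below are made-up examples, not actual output from the module), the idea is to feed the constructed observation text into the same context table that conversationaldialogue() sends to the LLM:

-- Rough sketch only: the personality and observation strings are invented examples
local personality = "I am Magus Art Studios, a wandering merchant."

local observations = {
	"You are standing in a stone courtyard.",
	"To your left is a wooden cart. Ahead of you is a locked gate."
}

-- Combine the personality line and the observations into the context table
-- that conversationaldialogue() passes along as past_user_inputs
local context = {personality}

for _, observation in ipairs(observations) do
	table.insert(context, observation)
end

local reply = conversationaldialogue("What should you do?", context, {}, 1)

print(reply)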

I will reiterate this.

You are not explaining the mathematics or giving me research papers related to your version of a “large language model” that I want to look into, because hey, if you know it, then it must be popular, right?

You are blatantly giving an “explanation” based on code. Code which sends text to a Hugging Face LLM server and brings back the results. That’s not even an explanation.

Also, you haven’t explained the mathematics behind the “table of observation” related to LLMs that you have described. What is it? I want to know it. And explain it. If you can’t explain it, send me a research paper.

You’re a funny guy, gaslighting. I think you’re adorable. Here, let me put it in words you cannot dwell on: a “table of observations” is a constructed database of strings created by my algorithm that gives individual observations of the closest objects and a full observation of the entire scene. These smaller strings are useful, but the main chunk is the paragraph entry.
I don’t need to provide you with anything. You are very rude and demeaning, and I do not appreciate your demeanor in this matter; you are being condescending.
You can exist in your world of exact research papers and not be a free-thinking individual with a subjective thought to evaluate what you see in front of you. Not to mention, this module I graciously shared with the community is related to Text-Vision, as in a text-based vision module.
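
For a concrete picture, here is a minimal sketch of what such a table of observations could look like (the entries are invented examples, not output from the module):

-- Invented example of a "table of observations": individual strings for the
-- closest objects plus one paragraph entry describing the whole scene
local tableOfObservations = {
	individual = {
		"A treasure chest sits just to your right.",
		"A torch flickers on the wall behind you."
	},
	fullScene = "You are in a dim dungeon cell. A treasure chest sits just to your right, "
		.. "a torch flickers on the wall behind you, and the iron door ahead is shut."
}

-- The smaller strings can be used on their own, but the paragraph entry is
-- the main chunk that gets passed to the LLM as context
print(tableOfObservations.fullScene)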

Finally. Took you long enough. So basically it is just your own definition that isn’t known to the LLM community.

Why did it take that long to actually pull out that explanation instead of giving me random code?

Because it’s not hard to understand; you misunderstood and are asking foolish questions, trying to be negative. It’s not hard to visualize what you can do with an LLM if it knows what character it is roleplaying as and what its surroundings look like, and if you were to ask it to create a commentary based on those observations and that character, then it would. Do you comprende that, compadre?

Is DQN with experience replay in the stable version now? Just to be sure, your DQN includes the target network improvement as well, right?

Yes. Use the :setExperienceReplay() function. Experience replay is disabled by default to avoid eating up resources.

Also, for your second question, yes. The target network improvement is applied automatically.

Okay, cool. It would be great if you could add extensions to DQN like Double DQN, Dueling DQN, and prioritized experience replay. Maybe even Rainbow DQN, as shown in this paper: arxiv.org/pdf/1710.02298.pdf, but that might be too much work.

Double DQN is relatively easy to implement though, it just requires changing the Q_Target when the next state is not terminal to:
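
Target = Reward + Gamma * Q(s', Argmax a' Q(s', a', theta), theta')

where theta is the online network (used to pick the best next action) and theta' is the target network (used to evaluate it). That is the standard Double DQN target, written in the same pseudocode style as the snippet further down.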

I’ll just leave it here.


Rainbow DQN? Very interesting name.

That being said, I don’t think I can add more stuff to it since I will be focusing on my work.

Thanks for your ideas though. I’ll probably release them in the Release 1.3 version.

For now, people should enjoy the stability of the library instead of having to adapt to newer versions. So maybe I’ll release it in a month (if I still remember to do that).


Also, I made the code readable for you guys to modify for your own needs. So feel free to play around with it, for example by adding new optimizers or interesting neural network variations.

Plus, my custom matrix library, which is integrated into this DataPredict library, should easily help you meet those needs.

Be sure to check the “API design” part.


Hi, I looked at the code for the QLearningNeuralNetwork module and I didn’t see any hyperparameter for the target network update frequency. Are you sure you included the target network improvement?

function QLearningNeuralNetworkModel:update(previousFeatureVector, action, rewardValue, currentFeatureVector)

	if (self.ModelParameters == nil) then self:generateLayers() end

	local predictedValue, maxQValue = self:predict(currentFeatureVector)

	local target = rewardValue + (self.discountFactor * maxQValue[1][1])

	local targetVector = self:predict(previousFeatureVector, true)

	local actionIndex = table.find(self.ClassesList, action)

	targetVector[1][actionIndex] = target

	self:train(previousFeatureVector, targetVector)
	
end

You seem to be using the same neural network that predicts the Q values to update itself. Also, you seem to be missing the “IsTerminalState” value stored for each experience, which would be used with the following logic:

if IsTerminalState then
   Target = Reward
else
   Target = Reward + Gamma * Argmax a' Q(s',a',theta')
end
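
To make the suggestion concrete, here is a rough sketch of how the update could use the terminal-state flag together with a separate target network that is refreshed every few steps. The names TargetModelParameters, targetNetworkUpdateFrequency, getModelParameters() and setModelParameters() are placeholders I am assuming here, not necessarily your library’s actual API:

function QLearningNeuralNetworkModel:update(previousFeatureVector, action, rewardValue, currentFeatureVector, isTerminalState)

	if (self.ModelParameters == nil) then self:generateLayers() end

	local target = rewardValue

	if not isTerminalState then

		-- Temporarily swap in the frozen target network parameters to evaluate the next state
		local onlineParameters = self:getModelParameters()

		self:setModelParameters(self.TargetModelParameters or onlineParameters)

		local _, maxQValue = self:predict(currentFeatureVector)

		self:setModelParameters(onlineParameters)

		target = rewardValue + (self.discountFactor * maxQValue[1][1])

	end

	local targetVector = self:predict(previousFeatureVector, true)

	local actionIndex = table.find(self.ClassesList, action)

	targetVector[1][actionIndex] = target

	self:train(previousFeatureVector, targetVector)

	-- Copy the online parameters into the target network every targetNetworkUpdateFrequency updates
	self.updateCount = (self.updateCount or 0) + 1

	if (self.updateCount % self.targetNetworkUpdateFrequency) == 0 then

		self.TargetModelParameters = self:getModelParameters()

	end

end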

RL — DQN Deep Q-network. Can computers play video games like a… | by Jonathan Hui | Medium

Some interesting graphs I found online: Divergence in Deep Q-Learning: Tips and Tricks | Aman (amanhussain.com)
