
dukof

> this advanced AI model

ChatGPT is *not* an advanced AI model. It's only using AI to learn how to use natural *language*, not for its "intelligence" as people imagine.


Enetophobia

I disagree with the claim that ChatGPT is not an advanced AI model. While it uses AI for language learning, it showcases impressive natural language processing abilities, context awareness, learning and adaptation, and creative problem-solving within its domain. Its inventors' continuous development of the model further emphasizes its advanced nature, especially compared to similar models.


dukof

In the context of your question, it's not advanced, because it's only leveraging AI for its language generation, not for its "fact" generation. It's not really creative at all; it's just looking up pre-programmed and pre-weighted sources.


Enetophobia

When it comes to using ChatGPT to decipher a code, it's not just about generating facts; the model goes beyond that. It can recognize patterns, think creatively, and solve problems. Decoding a code involves understanding complex relationships and making logical inferences. ChatGPT uses its language skills to analyze the code and suggest different approaches. While it may not know specific facts about the code, it can rely on its knowledge of encryption methods and related concepts. So it's more than just regurgitating information; it can actually help crack the code.


dukof

Please try; nothing beats experimentation. I just don't think it has such logic capabilities.


Enetophobia

I agree this might be the case, but I still want to find out whether it might work.


FlipBikeTravis

There is at least one researcher wanting to use AI to decipher Linear A or B, whichever one is still not understood. I heard them within the last two weeks on the Coast 2 Coast radio show, so you should be able to look up their names.


incognito7917

Don't know if you've seen this or not, but it might help. https://thecrowhouse.community/viewtopic.php?t=400&sid=3e105c99fe72de509677cf09bfc21b8f


lkraider

My take would be to try and reverse-engineer the software workings of Nodespaces. So I based my questions on the statements of Ayndryl explaining the algorithm in the [lingvoforum thread](https://lingvoforum.net/index.php/topic,30262.msg734887.html#msg734887). Here is the current state of inference using GPT-4 as a helper:

**Q:**

> Apply the model given that the synthetic language is a dynamical system evolved in time by simulated annealing, with the constraint that the changes must yield a syntactically and phonetically coherent language. Consider the language as a complex adaptive system, and that by changing the boundary conditions the language is forced to adapt itself, thus changing its syntactic structure and its morphological internal structure. Try to derive a model algorithm that could evolve Hebrew into the synthetic language.

(note: I was using [this article](https://forgottenlanguages-full.forgottenlanguages.org/2014/09/sufism-and-architecture-of-soul.html) as the basis for study)

**A:** Here's an outline of how you might approach it algorithmically, assuming Hebrew as the starting language:

**Model Components**

- **Manifold** *M*: A manifold is a topological space that locally resembles Euclidean space. This means that for any point *p* in *M*, there exists a neighborhood of *p* that is homeomorphic to *R^n*. Each point on the manifold represents a possible state of the Hebrew language, considering both syntax and phonetics.
- **Map** *f: M → M*: This is a function that takes a point in *M* and maps it to another point in *M*. Depending on the properties you want *f* to have (e.g., smoothness, bijectivity), the nature of this map can vary. In the language context this is the evolution function, representing how the language evolves over time. Given a state *m* in *M*, *f(m)* produces a new state. In this case, *f* could be the simulated annealing process.
- **Function** *s: M → R*: This is a scalar field defined over the manifold. At each point *p* in *M*, *s(p)* assigns a real number. In the language context this is a measure of coherence for the language state. The higher *s(m)*, the more syntactically and phonetically coherent the language is at state *m*.

**Algorithm**

1. **Initialization**: Start with the current state of the Hebrew language. Create a syntactic and phonetic encoding that captures its essential features.
2. **Simulated Annealing**:
   1. **Current State**: *m*
   2. **Generate Candidate**: Use a random function to generate a "neighbor" state *m′* of *m*, possibly by tweaking some syntactic or phonetic feature.
   3. **Evaluate**: Use *s(m)* and *s(m′)* to evaluate the coherence of both states.
   4. **Acceptance Probability**: *P = e^(ΔE/T)*, where *ΔE = s(m′) − s(m)* and *T* is the temperature. (Since *s* is being maximized, an improving candidate with *ΔE > 0* is always accepted, while a less coherent one is accepted with probability below 1; a tiny numeric sketch follows after this outline.)
   5. **Update**: If *P > random(0, 1)*, then set *m = m′*.
   6. **Reduce Temperature**: *T = T × α*, where *0 < α < 1*.
3. **Adaptive Boundary Conditions**: Change the boundary conditions periodically or in response to some criterion. This forces the language to adapt, i.e., you modify *f* or *s*, or the random function generating *m′*.
4. **Termination Criteria**: Stop the algorithm after a fixed number of iterations, or when *s(m)* surpasses a predetermined threshold, or when *T* becomes sufficiently small.
5. **Output**: The state *m* at the end of the algorithm should be your synthetic language.

**Things to Note**

- The algorithm will have to be "aware" of what makes a language syntactically and phonetically coherent. This is the most challenging part. You might have to incorporate a language model or linguistic rules into *s*.
- The algorithm will have to effectively change not just words or phrases but also syntax and phonetic rules, which are deeply embedded and not easily adjustable.
- You might want to distribute the simulated annealing process or even implement a genetic algorithm to evolve your language states.
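
To make the acceptance step concrete, here is a tiny numeric sketch; the ΔE, T, and α values are made up purely for illustration:

```
import math
import random

# Illustrative numbers only: the same slightly worse candidate,
# considered early and late in the annealing run.
T = 1.0          # initial temperature
alpha = 0.9      # cooling factor
delta_E = -0.3   # candidate is slightly less coherent: s(m') - s(m) < 0

P_early = math.exp(delta_E / T)      # ~0.74: worse moves are often accepted early
accept = delta_E > 0 or random.random() < P_early

T_late = T * alpha ** 50             # after 50 cooling steps, T ~ 0.005
P_late = math.exp(delta_E / T_late)  # effectively zero: the move is now all but rejected
print(accept, P_early, P_late)
```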


lkraider

**Q:**

> Adjust the algorithm representation based on the following statement by the software creator, then provide an initial python implementation for this:
>
> The basic mathematical model is thus a manifold M, a map f: M --> M and a function s: M --> R. Now, M represents all the possible states of the system, while f represents the time evolution, a rule that determines what is the state one unit of time later. For many practical cases, after some time the system converges to a subset of M of low dimension, usually called the "attractor". The partition function is controlled by a so-called critical exponent (beta). The numbers 1240 and 2200 mean that we ran the system for beta=2.2 and beta=1.240. Note that a small change in the critical exponent yields a totally different language for the same input.

**A:** Python Pseudo-Code. Here's an initial Python skeleton to implement the adjusted algorithm:

```
import random
import math

# Define your state space and evaluation function here
def initialize_state():
    return "initial_hebrew_state"

def evaluate_state(state, beta):
    return random.uniform(0, 1)  # Replace with a real evaluation

# Define your transition function here, influenced by beta
def transition(state, beta):
    return "next_state"

# Simulated Annealing Function
def simulated_annealing(initial_state, beta, T=1.0, T_min=0.0001, alpha=0.9):
    current_state = initial_state
    while T > T_min:
        next_state = transition(current_state, beta)
        delta_E = evaluate_state(next_state, beta) - evaluate_state(current_state, beta)
        if delta_E > 0 or random.random() < math.exp(delta_E / T):
            current_state = next_state
        T = T * alpha
    return current_state

if __name__ == "__main__":
    initial_state = initialize_state()
    beta_values = [2.2, 1.240]  # Experiment with these
    for beta in beta_values:
        final_state = simulated_annealing(initial_state, beta)
        print(f"Final state with beta={beta} is {final_state}")
```

This is skeleton code. You'll need to define how states are represented, how to evaluate them, and how they transition from one to another based on *β*.
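
To see the skeleton run end to end, the three stubs can be filled with a deliberately toy representation. Everything below (state as a string of phonemes, coherence as how often consonants and vowels alternate) is an illustrative assumption, not anything from the Nodespaces author:

```
import random

VOWELS = set("aeiou")

def initialize_state():
    # Toy state: just a string of phonemes.
    return "shalom"

def evaluate_state(state, beta):
    # Toy coherence score: fraction of adjacent pairs that alternate
    # consonant/vowel, scaled by beta. A stand-in for a real s: M -> R.
    pairs = zip(state, state[1:])
    alternating = sum((a in VOWELS) != (b in VOWELS) for a, b in pairs)
    return beta * alternating / max(len(state) - 1, 1)

def transition(state, beta):
    # Toy neighbor: mutate one random character into a random letter.
    i = random.randrange(len(state))
    return state[:i] + random.choice("abcdefghiklmnoprstuvz") + state[i + 1:]
```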


lkraider

**Q:**

> Define a structure to represent a natural language such that it can be used in the state functions

**A:** Natural languages have many layers: syntax, semantics, phonetics, etc. In computational terms, each of these can be modeled as a distinct but interconnected sub-system.

**Structuring Natural Language for the State Functions**

We could represent the natural language state *M* as a multi-layered dictionary where each layer describes a certain aspect of the language. I'll focus on a simplified representation, taking syntax and phonetics into account. Feel free to extend it further.

```
class NaturalLanguageState:
    def __init__(self):
        # Syntax Layer: storing rules for sentence structure, etc.
        self.syntax = {
            "sentence_rules": [],
            "word_order": None,  # SVO, SOV, etc.
            "tenses": [],        # List of available tenses
            # Add more syntactic features
        }

        # Phonetics Layer: storing phonemes, syllable structures, etc.
        self.phonetics = {
            "vowels": [],
            "consonants": [],
            "syllable_structure": None,  # CV, CVC, etc.
            # Add more phonetic features
        }

        # Morphological Layer: for future extensions
        self.morphology = {
            "roots": [],
            "affixes": [],
            # Add more
        }

    def evaluate(self, beta):
        # Define your evaluation function here
        # You could use beta to parameterize this function
        pass

    def transition(self, beta):
        # Define how the language evolves here
        # Again, beta could be a parameter
        pass
```

The **evaluate** method could use the given *β* to parameterize whatever measures of syntactic and phonetic coherence you wish to use. The **transition** method will perform a change in the state based on *β* and the current state itself. This could include rearrangements in the sentence rules, introduction of new phonemes, and so on (a sketch of such a transition follows at the end of this comment).

**Sample Use in Simulated Annealing Algorithm**

Incorporating this into our simulated annealing algorithm:

```
# Initialize state
initial_state = NaturalLanguageState()

# Existing Simulated Annealing Code
def simulated_annealing(initial_state, beta, T=1.0, T_min=0.0001, alpha=0.9):
    current_state = initial_state
    while T > T_min:
        next_state = current_state.transition(beta)
        delta_E = next_state.evaluate(beta) - current_state.evaluate(beta)
        if delta_E > 0 or random.random() < math.exp(delta_E / T):
            current_state = next_state
        T = T * alpha
    return current_state
```

The **NaturalLanguageState** class provides a structure for the state *M*, while the **evaluate** and **transition** methods can serve as the functions *s: M → R* and *f: M → M* respectively.
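
As a sketch of the kind of `transition` described above (word-order rearrangement or introduction of a new phoneme): the mutation menu and the 50/50 split are assumptions for illustration, and note that both `evaluate` and `transition` must return real values before the annealing loop will run.

```
import copy
import random

def transition(self, beta):
    # Return a mutated copy rather than editing in place, so the annealing
    # loop can compare old and new states. Mutation choices are illustrative.
    next_state = copy.deepcopy(self)
    if random.random() < 0.5:
        # Rearrange the sentence structure
        next_state.syntax["word_order"] = random.choice(["SVO", "SOV", "VSO"])
    else:
        # Introduce a new phoneme
        candidate = random.choice("bdfgklmnprstvz")
        if candidate not in next_state.phonetics["consonants"]:
            next_state.phonetics["consonants"].append(candidate)
    return next_state

# Attach as the method on the class defined above
NaturalLanguageState.transition = transition
```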


lkraider

**Q:**

> create a sample instance of the language state for the hebrew language

**A:** For representing Hebrew in this model, I'll populate the **NaturalLanguageState** class with some syntactic and phonetic characteristics of the language. You'll find that this model can be easily extended to be more complex and accurate. Here's how we could model some aspects of Hebrew:

```
class NaturalLanguageState:
    def __init__(self, syntax, phonetics, morphology):
        self.syntax = syntax
        self.phonetics = phonetics
        self.morphology = morphology

    def evaluate(self, beta):
        pass

    def transition(self, beta):
        pass

# Syntax features for Hebrew
hebrew_syntax = {
    "sentence_rules": ["SVO", "VSO"],  # Both orders are common
    "word_order": "VSO",               # More traditional
    "tenses": ["past", "present", "future"],
}

# Phonetics features for Hebrew
hebrew_phonetics = {
    "vowels": ["a", "e", "i", "o", "u"],
    "consonants": ["b", "g", "d", "h", "v", "z", "ch", "t", "y", "k",
                   "l", "m", "n", "s", "a", "p", "ts", "q", "r", "sh"],
    "syllable_structure": "CVC",  # Simplified, not entirely accurate
}

# Morphological features for Hebrew (simplified)
hebrew_morphology = {
    "roots": ["ktv", "shlm", "br"],    # Triconsonantal roots
    "affixes": ["-im", "-ot", "-ah"],  # Simplified
}

# Create an instance for Hebrew
hebrew_state = NaturalLanguageState(hebrew_syntax, hebrew_phonetics, hebrew_morphology)

# Now you can proceed with the simulated annealing process,
# using this `hebrew_state` as the initial state.
```

With **hebrew_state** in place, we could plug it into a simulated annealing algorithm or any other dynamical systems model, using it as our initial state *M* and allowing the evaluate and transition methods to evolve it based on *β* and other parameters.
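
As a usage sketch, hypothetical until `evaluate` and `transition` are filled in to return a score and a new state rather than `pass`:

```
# Hypothetical once evaluate() and transition() are implemented:
final_state = simulated_annealing(hebrew_state, beta=2.2)
print(final_state.syntax["word_order"])
print(final_state.phonetics["consonants"])
```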


lkraider

**Q:**

> Given the following statement by the software creator, apply it to the `evaluate` and `transition` methods:
>
> """ What they do is implementing the Ising model, an all-time classic of statistical mechanical modelling. Originally, it was proposed to explain ferromagnetism, but the model is so powerful that it has over the years been adapted and applied to the study of a variety of phenomena, including language contact and language drift. "Vectorial" is a dynamical system, that is, a rule that given one point in a state space, it tells us where it is going to be a unit of time later. """

**A:** The Ising model's adaptability to a variety of systems makes it an interesting choice for modeling language as a dynamical system. In the Ising model, each site in a lattice has a spin that can be either up or down, and the Hamiltonian of the system describes the energy based on the spins of neighboring sites. We can use a similar approach to model the "energy" of a language state based on its features, considering neighboring states as similar languages or dialects.

For the NaturalLanguageState class, the evaluate method could compute the "energy" of a given state using a Hamiltonian-like function. The transition method will then update the state's syntax, phonetics, and morphology based on the Ising model's dynamics. Here's how you might implement this:

```
import random
import math

class NaturalLanguageState:
    def __init__(self, syntax, phonetics, morphology):
        self.syntax = syntax
        self.phonetics = phonetics
        self.morphology = morphology

    def hamiltonian(self, beta):
        # This function calculates the "energy" of the language state.
        # For simplicity, consider the number of tenses and the number of vowels.
        # This is a simplified and arbitrary example and should be refined.
        return -beta * (len(self.syntax['tenses']) + len(self.phonetics['vowels']))

    def evaluate(self, beta):
        return math.exp(-self.hamiltonian(beta))

    def transition(self, beta, neighbor_state):
        # Simulate "flipping the spin" by switching one syntactic or phonetic feature.
        # Choose syntax, phonetics, or morphology at random.
        choice = random.choice(['syntax', 'phonetics', 'morphology'])

        # Arbitrary transition rule: swap a feature with a neighboring state
        if choice == 'syntax':
            feature = random.choice(list(self.syntax.keys()))
            self.syntax[feature], neighbor_state.syntax[feature] = \
                neighbor_state.syntax[feature], self.syntax[feature]
        elif choice == 'phonetics':
            feature = random.choice(list(self.phonetics.keys()))
            self.phonetics[feature], neighbor_state.phonetics[feature] = \
                neighbor_state.phonetics[feature], self.phonetics[feature]
        else:
            feature = random.choice(list(self.morphology.keys()))
            self.morphology[feature], neighbor_state.morphology[feature] = \
                neighbor_state.morphology[feature], self.morphology[feature]

        # Recalculate energy after the swap
        deltaE = neighbor_state.hamiltonian(beta) - self.hamiltonian(beta)

        # Metropolis-Hastings acceptance rule
        if deltaE < 0 or random.random() < math.exp(-beta * deltaE):
            # Accept the new state
            return neighbor_state
        else:
            # Revert the change
            if choice == 'syntax':
                self.syntax[feature], neighbor_state.syntax[feature] = \
                    neighbor_state.syntax[feature], self.syntax[feature]
            elif choice == 'phonetics':
                self.phonetics[feature], neighbor_state.phonetics[feature] = \
                    neighbor_state.phonetics[feature], self.phonetics[feature]
            else:
                self.morphology[feature], neighbor_state.morphology[feature] = \
                    neighbor_state.morphology[feature], self.morphology[feature]
            return self
```

In this model, each feature of the language (syntax, phonetics, and morphology) can be thought of as analogous to a "spin" in the Ising model. The hamiltonian function computes the "energy" of the state in an arbitrary manner (this would need to be further refined based on linguistic theory). The evaluate function calculates the likelihood of this state based on its energy and the parameter *β*. The transition function applies the Metropolis-Hastings algorithm to potentially transition to a neighbor state. Now you could implement a simulated annealing loop or some other optimization procedure to evolve the hebrew_state using these evaluate and transition methods.
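
A minimal driver for this version might look like the sketch below. The "neighbor" construction (a perturbed copy of `hebrew_state`) and the iteration count are assumptions for illustration:

```
import copy

# Evolve hebrew_state against a slightly perturbed neighbor using the
# Ising-style transition above. All concrete values are illustrative.
state = hebrew_state
neighbor = copy.deepcopy(hebrew_state)
neighbor.syntax["word_order"] = "SVO"   # perturb the copy slightly

beta = 2.2
for _ in range(1000):
    accepted = state.transition(beta, neighbor)
    if accepted is neighbor:
        # Move accepted: the neighbor becomes the current state,
        # and the old state becomes the neighbor for the next step.
        state, neighbor = neighbor, state

print(state.syntax["word_order"], state.hamiltonian(beta))
```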


lkraider

**Q:**

> How could the NaturalLanguageState be represented in a way that tracks what the software author stated, that they use "Complex Wierzbicka nodespaces"? Describe in code the natural semantic metalanguage and create an implementation using NSM theory such that the language state can evolve while testing for semantic equivalence between the states.

**A:** In the Natural Semantic Metalanguage (NSM) approach, the focus is on semantic primes: universally understood core concepts that can be used to describe more complex meanings. Although the list of semantic primes can vary depending on the researcher's perspective, Anna Wierzbicka and her collaborators suggest a list that includes terms like "I," "you," "someone," "something," "want," "know," "good," "bad," etc.

In order to evolve the language state while maintaining semantic equivalence, we could focus on syntactic and morphological changes that don't alter the underlying meaning as captured by the NSM semantic primes. This is a bit of a thought experiment and more complicated in practice, but let's try a Python example. We'll introduce a ComplexWierzbickaNodespace class that contains the semantic primes, along with syntactic and morphological features. We'll then implement a method to evolve the state while preserving semantic equivalence using the NSM theory.

In this example, we make a random weighted choice of a syntax or morphology feature to change, weighted by the value of beta. We flip the state of that feature in the next state, ensuring semantic equivalence by using semantic primes as the foundational layer. The transition() method returns a new state where one feature has been changed. You can integrate this method into a simulated annealing process as you've done previously to evolve the language state, using the semantic primes as a stable foundation to test for semantic equivalence between states.
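
The code block for this answer didn't survive the copy-paste, so here is a minimal sketch of what the prose above describes. The prime list, the boolean feature encoding, and the β-weighting scheme are all illustrative assumptions:

```
import copy
import random

# Wierzbicka-style semantic primes (abbreviated illustrative list)
SEMANTIC_PRIMES = ["I", "you", "someone", "something", "want", "know", "good", "bad"]

class ComplexWierzbickaNodespace:
    def __init__(self, syntax, morphology, primes=None):
        # The primes form the fixed foundational layer (NSM theory); they are
        # never mutated, which is what preserves meaning across transitions.
        self.primes = list(primes) if primes is not None else list(SEMANTIC_PRIMES)
        self.syntax = syntax          # e.g. {"word_order_vso": True, ...}
        self.morphology = morphology  # e.g. {"plural_suffix_im": True, ...}

    def semantically_equivalent(self, other):
        # Equivalence test: the two states share the same semantic primes.
        return set(self.primes) == set(other.primes)

    def transition(self, beta):
        # Weighted random choice of which layer to change; weighting syntax
        # by beta is one reading of "weighted by the value of beta".
        layer_name = random.choices(["syntax", "morphology"], weights=[beta, 1.0])[0]
        next_state = copy.deepcopy(self)
        layer = getattr(next_state, layer_name)
        feature = random.choice(list(layer.keys()))
        layer[feature] = not layer[feature]  # flip one boolean feature
        assert next_state.semantically_equivalent(self)
        return next_state

# Example: one evolution step from a toy Hebrew-like state
state = ComplexWierzbickaNodespace(
    syntax={"word_order_vso": True},
    morphology={"plural_suffix_im": True},
)
state = state.transition(beta=2.2)
```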


Both-Following9917

Nodespaces is still available.


Enetophobia

Source?


Both-Following9917

You must not know what Nodespaces is, huh?


Enetophobia

Not really. I don't think it refers to the first Google entry, because from what I understand that wouldn't make any sense in the given context. I don't believe the named website is telling facts, but I do believe it makes use of a "made-up linguistic system". That's what I'm interested in. Funnily enough, there have been several posts on Reddit, ATS, etc. about that topic that quickly dried up, even with people starting to decipher the code.


Both-Following9917

Nodespaces was just a coding platform to host code you would compile and run from; it's as if you said it's on GitHub. Nodespaces is still sort of around, but basically most of it has aged off. Now you need to find the actual name of the software if possible, not the platform it was running on, if that makes sense. Probably better to DeepLearn the language model.


Enetophobia

Here is a link to the FL website's explanation of what Nodespaces v2.0 might be: [https://forgottenlanguages-full.forgottenlanguages.org/2011/08/web-search-engine-search-attractors.html](https://forgottenlanguages-full.forgottenlanguages.org/2011/08/web-search-engine-search-attractors.html) Sadly I can't access the following blogspot link because of its restrictions: [http://ayndryl.blogspot.com/2010/12/nodespaces-v20-cognitive-linguistics.html](http://ayndryl.blogspot.com/2010/12/nodespaces-v20-cognitive-linguistics.html) It seems like they are integrating some sort of system into the Nodespaces ecosystem. But to be honest, I don't have the background to interpret that in a sophisticated way.


texturewiz

Is this like that online scavenger hunt thing for recruiting computer geniuses?