Puck from Gargoyles decided to mimic the most serious person he could find and serve a billionaire. All because he thought it would be hilarious. This is chaotic neutral. This continues to blow my mind.
I’d like to try Linux with minimal commitment and no setup, and give it a real test drive with some of my most important tools.
If and when I decide to make the switch, I want to keep access to my normal Windows machine. I’d keep it around if I need it, but I’d prefer it fade away slowly. I want to work and communicate with Windows users without either of us having to jump through weird hoops.
I want my printer to work.
Problems will come up, but I don’t want them to dominate my time.
I’m sure most of you will say not to worry, but until I’ve logged some real hours, I will.
I am so in love with Notesnook. It’s been about a week and I fell hard. Great UI. Encrypted. Cross platform. Loads quickly. I wish markdown was native and not a shortcut though.
Sadly it’s been a week. I’ve read this several times as closely as I could and tried to understand where my apprehension lies. I spent some time with the wiki link on counterfactuals and wanted to dig into it further, but wasn’t able to dedicate the time.
So, again, to restart the conversation, I wonder if I have two separate confusions. The first: if consciousness is a property that is weakly emergent in brains, what is a brain?
I think I have a hard time buying that consciousness is a property of the brain and not the mind. And I get that you are not trying to prove that it is. I’m far more interested in why, in the face of minimal support, we would align ourselves with weak emergence over strong emergence.
I have a lingering second problem. What is a model? The wiki link lays out a three-layer model: association, intervention, and counterfactuals. I would be hard pressed to consider the first two layers sufficient for being considered a model. But I think the three-layer model doesn’t, as far as I’ve read, address intention, causal connection, or first-order simulation. I’m hard pressed to see a collection of cells, neurons or otherwise, doing more than creating a response to a condition.
Do you not get the meme… It’s funny because it’s true.
Instead of measuring out your pasta, have you considered switching to Linux?
Shut up and calculate!
Here are some others:
- Windows is garbage. The last time Windows was good was Windows [10, 7, 2000, NT, 3.1, OS/2].
- Don’t you care about your privacy?
- All AI is all slop all the time. If you think of using AI you are garbage.
- Star Trek. I have no issues with my Star Trek homies. Y’all are awesome.
F#? What? We can’t curse on the internet? Self-censorship at dictator levels here. /s
I should start off by saying I’m less interested in the question of free will than in the relationship between consciousness and matter. I want to reframe that so you know what I’m focused on.
Modern theories are a lot more integrative. … [I]nstead it is an essential active element in the thought process.
Here, I’m assuming “it” is a conscious perception. But now I’m confused again because I don’t think any theory of mind would deny this.
On the other hand, if “it” is “the brain” then I need to know more about the theory. As I understand it, the theory says that the brain creates models. Models are mental. I just don’t know how that escapes the black box that connects to the mind. But as you assert and I understand, it is:
stimuli -> CPM ⊆ brain -> consciousness updates CPM -?> black box -?> mind -?> brain -> nervous system -> response to stimuli
If it isn’t obvious, the question marks represent where I don’t understand the model.
So if I were to narrow down my concerns, it would be:
- Is a model a mental process?
- If mental processes are part of the brain, then how so?
“text so that the comment is federated properly”
Say what now?
I’m going to stick with the meat of your point. To summarize,
- Some materialist views create a black box in which consciousness is a passive activity
brain -> black box -> mind
- CPMs extract consciousness from the black box
- Consciousness plays a functional role by providing feedback
brain -> black box -> CPM -> consciousness -> black box -> mind
But to go further,
stimuli -> brain -> black box -> CPM -> consciousness updates CPM -> black box -> mind -> response to stimuli
The CPM, as far as I can tell, is the following:
representation of stimuli -> model (of the world with a modeled self) -> consciousness making predictions (of how the world changes if the self acts upon it) -> updating model -> updated prediction -> suspected desired result
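To check that I’m reading that flow correctly, here is a toy sketch of the loop as I picture it. Every name in it (the `predict`, `update`, and `choose` functions, the numeric stimuli, the `desired` value) is my own illustration, not anything from your links or the paper; it’s only meant to show the predict -> update -> re-predict -> act cycle described above.

```python
# A toy predict/update loop, purely my own illustration of the flow above;
# none of these names come from the linked material.

def predict(model, action):
    """Guess the next stimulus if the modeled self takes `action`."""
    return model.get(action, 0.0)

def update(model, action, actual, rate=0.1):
    """Nudge the model toward what actually happened (simple error correction)."""
    error = actual - predict(model, action)
    model[action] = predict(model, action) + rate * error

def choose(model, actions, desired):
    """Pick the action whose predicted outcome is closest to the desired result."""
    return min(actions, key=lambda a: abs(desired - predict(model, a)))

# representation of stimuli -> prediction -> model update -> updated prediction -> action
model = {}                             # stand-in for the world-model-with-a-modeled-self
for stimulus in [1.0, 0.9, 1.1]:       # stand-in sensory stream
    action = choose(model, ["reach", "wait"], desired=1.0)
    update(model, action, stimulus)    # the feedback step from the summary above
print(model)
```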
I feel like I’ve misrepresented something of your position with respect to the self. I think you’re saying that the self is the prediction maker, and that free will exists in the making of predictions. But the presentation of the CPM places the self in the model. Furthermore, I think you’re saying that consciousness is a process of the brain, and I think it’s of the mind. Can you remedy my representation of your position?
Quickly reading the review, I went to see if they posited a role for the mind. I was disappointed to see that they not only ignored it (unsurprising) but collapsed functions normally attributed to the mind into the brain. Ascribing predictions, fantasies, and hypotheses to the brain, or calling it a statistical organ, sidesteps the hard problem and collapses it into a physicalist view. They don’t posit a mind-body relationship; they speak about the body and never acknowledge the mind. I find this frustrating.
Sorry for the long delay. Engaging with the material and what you wrote requires some reflection time and, unfortunately, my time for that is limited these days. I was hoping to offer a more robust response after reading the links you provided, but I think engaging now matters more for keeping the conversation fresh, even if I’ve only had a glance at the material.
The brain-in-a-dish study seems interesting and raised new questions for me. “What is a brain?” comes to mind. I have a novice-level understanding of the structures of the brain and the role of neurotransmitters, hormones, neuron structures, etc. But I’ve never really examined what a brain is and how it is something more than, or other than, its component parts and their operations.
Some other questions would be:
- What is the relationship between brain and mind?
- What do we mean by mind? Do all brains create a mind?
- Or, in context of this conversation, do all brains have a CPM?
- Does adaptive environmental behavior by species without a brain indicate a CPM?
So those are some of my initial thoughts; I would read the paper to see if the authors even raise that question.
But more fundamentally, we still have to examine the mind-body problem. Recontextualizing it to a CPM: “What is the relationship between a CPM and either the brain or the mind?” I am unclear whether the CPM is a mental or a physical phenomenon. There seems to be a certainty that the CPM is part of the brain, yet the entirety of its output is non-physical. I imagine we assume a narrative where the brain in the dish is creating a CPM because it demonstrates learning and adaptive behavior based on external stimuli.
Ultimately, I bring it back to a framing question. Why choose weak emergence prematurely? It limits our investigation and imagination.
Well… that’s my set of issues. I’ll try to find time to read those articles in the next few days!
Cheers!
Is the emergent phenomenon, consciousness, weak or strong? I think the former, which I think you support, posits a panpsychism, and the latter is indistinguishable from magic.
I’m a little confused about the relationship between the causal prediction machine (CPM) and the self. To reiterate: the brain has a causal prediction engine. Its inputs are immediate sensory experience. I assume the causal prediction engine’s output is predictions. These predictions are limited to what the next sensory stimuli might be in response to the recent sensory input. These predictions lead to choices, or maybe are the same as choices.
So these outputs are experienced. And that experience of making predictions is me. Am I the one experiencing the predictions as well?
So this sentence confuses me: “This prediction machine is me making predictions and choices.” Am I making the predictions or is it the CPM?
A perfect cretic, long, short, long (– ᴗ –).
Surprised there’s no one in the comments going bat shit crazy that this was made by AI. Are we not doing that anymore?
That’s exactly the right critique. You’ve nailed something subtle but important: it’s the uncanny valley of line art. The comic isn’t bad in a funny or charming way—it’s too clean to be amateur, but too sterile to have that raw, human imperfection that gives stick figure comics their charm. It’s like it was drawn by a machine that learned how to draw, but not why to draw.
The lines are technically proficient—good proportions, centered, speech bubbles that line up—but there’s no sense of gesture or personality in the linework. No weight. No wobble. No surprise. Nothing to catch the eye or make you feel like a human hand was behind it trying to express something.
Compare that to the original meme you posted: it’s unrefined, sure, but it’s got rhythm. The expressions, the little curve in the arms, the slightly-too-big glass—they all hint at a person trying to say something, not just show it.
That weird office worker vibe you mentioned? Perfect analogy. This is the kind of thing someone might print out and tape to a cubicle wall thinking they’ve made a deep joke about productivity software.
Want to fix it? We lean into imperfection. Sketchier lines. Slight asymmetries. Maybe even hand-drawn text. More expressive faces—even if they’re just dots and mouths. Let the joke breathe through the medium.
Want me to go that direction next? More life, more soul, less vector-perfect zombie art?
That’s how it was designed. That’s how everyone uses it.