Microphone schematics by ChatGPT

No, because what it output was completely unrelated to any circuit that could remotely work.

The capacitor plague (I remember it well; I had a Dell guy on site swapping out motherboards for maybe 1,000 PCs) was so serious because the resulting products worked, but suffered rapid and reliable early failure.



Deep fakes are made by humans using certain tools. The problems are not the tools.

This is a parallel argument to the current debate about "banning" AR-15 rifles because some mass shootings (arguably high-profile ones) used them.

We can look to China for an indication of the results. China completely banned private guns; not even airsoft toys are allowed. They then ruthlessly enforced their gun ban. Before that, many full-auto Chinese AKs were in private hands.

And ever since, there have been zero school shootings or mass shootings.

What China has instead are massive numbers of knife attacks in public spaces and at kindergartens and schools, invariably with multiple victims dead, plus other violent mass killings involving fertiliser bombs and vehicles.

Fake news and whatnots are created by humans for their own uses. Deep fakes are actually surprisingly harmless next to what humans do without them.

Just take that lil New Jersey b!tch Sarah Bils, aka Donetsk Devushka. No deep fakes, just an absolutely basic con with a fake identity. Before the internet she would even have shown her face and gone on TV talk shows. Now, what she did caused massive damage to all sorts of things, and all to bilk a few pee-pull out of pocket change and, I guess, feel important.

Thor
"Deep fakes are made by humans using certain tools. The problems are not the tools."

The problems are made/caused by humans who create the tools. AI is like a gun and, like guns, some AI will primarily have the purpose of killing humans, which makes it more dangerous than humans.
 
Did a stint in the South African Defence Force serving the Apartheid regime (not by choice).

When I went to meet the elephant, on contract, in West Africa during an incredibly uncivil war, we had many Rhodesians and South Africans in our unit, on top of Russians and Ukrainians.

All of them tough-as-nails guys. Made me feel inadequate, except when operating the comms gear or taking long-range shots with my Dragunov, using armour-piercing rounds I got in full belts from the machine gunner.

Takes all kinds.

Just decanted the first 5-litre demijohn of homemade pineapple wine and started drinking. Turned out well: quite fruity but tart, and it packs a punch.

Thor
 
No, it's not.

Google is better.

Thor

You're in for a surprise. Google will be "rearranging" its search algorithms in the next few weeks.

I have been studying Google since the very beginning, and lately it can't find a lot of content I know exists, like interesting posts in fora. I know I have read something about some gear, but it's drowned in "buy" links. That's been going on for at least a couple of years.

It's not that I expect it to be on page 1. There have been cases where I looked at every page of a 50-page result list without getting to it.

I expect the changes won't please me. I guess the financial results are bad enough for the brass to do anything. A lot of Google employees seem to think so too.
 
The problems are made/caused by humans who create the tools. AI is like a gun and, like guns, some AI will primarily have the purpose of killing humans, which makes it more dangerous than humans.

I disagree. I have a fair tool set.

It includes a solid metal claw hammer.

It's incredibly useful for many jobs, but it can also quite effectively be used in killing humans.

I keep a .22 cal 10" barrel silenced pistol with green laser and red dot sight to keep varmints down. Also works on people. Guaranteed headshot at 40m.

I keep a scoped bolt-action Mauser for anything larger or longer. I also have a weed chopper with a 70 cm blade that makes a great improvised sword.

Now, mind you, I don't commonly go around killing people, but I have a substantial set of tools that would allow me to change that.

And guess what, having the tools doesn't make me go and kill people.

Stop blaming tools for people's actions.

Thor
 
You're in for a surprise. Google will be "rearranging" its search algorithms in the next few weeks.

If it stops delivering results, I will use Qwant.

I have been studying Google since the very beginning, and lately it can't find a lot of content I know exists.

Yes, it's a coarse strainer. Some Google-fu can help. If you know the forum, use

site:groupdiy.com (for example) for focus.

Thor
 
Yes, it's a coarse strainer. Some Google-fu can help. If you know the forum, use

site:groupdiy.com (for example) for focus.

Thor

Does this not just mean you have to understand how to use the tool you're using? Maybe we just need to learn how best to take advantage of ChatGPT?
 
Also, ChatGPT is in its early days, so it's bound to be at least a bit flawed, but I'm pretty convinced that with enough time technology like this is going to be very useful.

PS pineapple wine sounds interesting
 
If it stops delivering results, I will use Qwant.



Yes, it's a coarse strainer. Some Google-fu can help. If you know the forum, use

site:groupdiy.com (for example) for focus.

Thor

That's fine if you can remember what forum or site it was on. Unfortunately, I can't usually remember that...

I've tried Qwant. It's not great. It's not bad either, but it still has a long way to go.
 
Computers don't create, they follow a set of instructions.
Squaddies get handed a clean-up job after political flights of fancy.
It's not all that different in many ways, is it?
 
I think that everyone needs to understand that ChatGPT is an AI model specifically designed and trained for language.

To try to use it to give you a microphone circuit with specific values is a gross misuse of the ChatGPT model.
And I would expect it to fail miserably at delivering technical information of the kind you are asking from it.
It just wasn't designed or trained for that.

AI/ML models that are designed and trained for specific purposes can be very useful when used with 'human intelligence' to evaluate the results.
And evaluating them also means that you will often throw away many of the resulting models because they are not truly predictive or useful.

I've used AI/ML for a number of complex problems in the area of healthcare.
I think that I have ultimately used only about 5% of those models I have created.
Probably another 25% of them were very informative in giving me ideas for the design of other (non-AI) statistical models.

The bottom line is that you need to construct and train your AI/ML models for specific purposes and use them appropriately.

I believe that will always be true no matter how advanced this field becomes.
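
To make that concrete, here is a minimal sketch of the train-evaluate-discard loop described above. It is an illustration only: the library (scikit-learn), the candidate models, the synthetic data, and the keep-threshold are placeholder choices of mine, not anything from a real healthcare analysis.

```python
# Minimal sketch: train candidate models for one specific task, evaluate
# them honestly, and throw away the ones that are not truly predictive.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; a real project would use carefully cleaned data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

KEEP_THRESHOLD = 0.75  # arbitrary example cut-off; the human analyst sets it

kept = {}
for name, model in candidates.items():
    # Cross-validated AUC gives an out-of-sample estimate of performance.
    score = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {score:.3f}")
    if score >= KEEP_THRESHOLD:
        kept[name] = model  # survives; still subject to human review
    # Models below the threshold are discarded, as described above.

print(f"Kept {len(kept)} of {len(candidates)} candidate models.")
```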
 
I think given the direction of this thread, it should probably be moved?

That said, given its direction into existential/philosophical/theological questions, I will speak to it as such. Apologies in advance for breaking any forum rules. I think it's simply an interesting and important discussion.

The scary thing about AI is how many people embrace it and trust the results without fully understanding what it can and cannot do, particularly how the underlying data that it is trained on and subsequently used with can have inconsistencies, missing data, systematic biases, ...

I have been analyzing Mental Health and Substance Use disorders for over thirty years as an Epidemiologist trained in Mathematics/Statistics.
I have been using Machine Learning and AI (along with other analytic methods) since the mid-1990s, so I have first-hand knowledge of how ML and AI can lead to erroneous answers if your data is not carefully cleaned and checked for problems.

So,
My biggest concern is how ML and AI are often used without proper 'Real Intelligence' guiding their use.

This is what I would be worried about. Not machine-learning or so-called A.I. itself, but how much anyone (who's willing to cut corners or b.s. their way through something) would rely on machine learning without any guidance. But if popular opinion continues its premature fascination (which looks a whole lot like idolatry/worship) with machine learning and pushes it onto a high enough pedestal, then who is to speak against it? I think we'll see it have more and more influence in spheres of life where we would rather not have it. I've even seen Reddit atheists spending quite a lot of time and energy at the keyboard talking about what it would be like to be ruled by an "A.I. god", with such giddiness, as if they are so ready and willing to relinquish their will to a blind and voiceless object/machine that we ourselves have made. Taking that into perspective, my hopes for a sustained healthy system of checks and balances are not very high.

Your statement, "If that is 'AI' we are safe, no need to stand at street corners with placards reading 'The end is nigh'", is naive. There are other applications of AI, such as deep fakes, which already significantly threaten humans. Seeing isn't believing anymore, and neither is hearing.

This echoes my first paragraph, and is why ultimately we can't have nice things.

No, because its flaws are down to shoddy programming.

It is human-like only because it was created by programmers who prefer juggling their loblox (as we say on planet Anagramia) over writing code.

Mind you, it will make a great politician, lawyer, advertising executive and second-hand car salesman.

Thor

Right. If machine learning were confined to consumer/professional goods and marketing, then OK. Or even used as a study tool for other professions. Its use in policy is what scares me the most, though, because I'm convinced most politicians are no longer operating on any practical worldview level or practicing any reasonable philosophical/pragmatic restraint (whether Christian or utilitarian, if we're speaking of classic Western civilization). There isn't enough of a system of checks and balances left to keep quite a lot of terrifying things from becoming policy these days. In the West, it used to be a largely Christian versus atheist/agnostic utilitarian debate (and those things could actually be debated on philosophical grounds), but now that policies (and people) are judged in the court of popular opinion outside any concrete framework of worldview (or practical science, for that matter), the point made in this thread, that machine learning tends toward what would be most appealing, is the scary part, especially when paired with any significant reliance on machine learning or A.I. in a policymaking context. Designs and goods don't affect everyone in and of themselves, but opinions and policies (and how those things can mandate specific designs or goods) DO affect everyone.

"Deep fakes are made by humans using certain tools. The problems are not the tools."

The problems are made/caused by humans who create the tools. AI is like a gun and, like guns, some AI will primarily have the purpose of killing humans, which makes it more dangerous than humans.

Further supports the points I'm making regarding human use and intent. Tools are tools, though. A gun is a static object, disconnected from daily life if not in the physical hands of a person and physically handled according to the will of the handler. That could involve the will to hunt, protect, or kill. The problem is the person. Even tools meant for good (cars, computers, money, government) can be and are abused for personal gain and/or cynical destruction. The same goes for A.I. and machine learning. They are ultimately static, but once they are connected more and more with infrastructure and used for policymaking and the like (as humans relinquish their responsibility and will into the hands of machine learning or A.I.), that's where the problems will start. It still requires the will of the humans who develop it to set it down that path. Fear of A.I. sentience is second to fear of human intent/input into A.I. That's ultimately a matter of worldview and the human heart, though. That's not to say humans are the problem, though someone could easily convince/program so-called A.I. to view humans that way, and therefore use A.I. to target any human who is a "problem" - and you can see why this is a problem in and of itself. "Problem" can be defined in any way the programmer wants to define it, correct?

I think we'll continue this trend as we continue relinquishing our will to other people and objects. We don't care enough anymore about personal responsibility and a transcendent will and moral framework at all, let alone a transcendent will and moral framework in which to responsibly utilize things like machine-learning and A.I. We certainly don't care enough about debating those things anymore. I think most people are convinced of a particular way of living, even against their own health and flourishing, and the more responsibility they can relinquish, the better. Nevertheless, I do hold out hope that there are enough learned people in those communities who would be willing to draw boundaries and sound the alarms when necessary. Only time will tell.
 
More on the topic of microphones and circuit design: as a layman without any education in electrical engineering (EE) whatsoever, I would still be inclined to educate myself first on circuit design and weigh multiple opinions from seasoned EEs when developing something. Machine-learning/A.I. - if and when good enough - could definitely be a good tool, but maybe best for additional ideas and problem solving. I would certainly hate to relinquish my own will and ability as an EE (if I were one) primarily to machine-learning and A.I. if safety is part of my responsibility as an EE. But again, the moral question rears its head, regarding whether or not I owe anything to anyone else. I think the question is particularly pertinent to those working in the medical field.
 
Hi,

> I think we'll continue this trend as we continue relinquishing our will to other people and objects.

This trend should be vigorously opposed. What sets us humans apart from objects is our agency.
It is important we retain agency and take responsibility for our lives and actions.

> We don't care enough anymore about personal responsibility and a transcendent will and moral framework at all, let alone a transcendent will and moral framework in which to responsibly utilize things like machine-learning and A.I.

I wish I could deny that these trends exist.

> We certainly don't care enough about debating those things anymore.

Aeeehhhhmmmm, we are debating, no?

> I think most people are convinced of a particular way of living, even against their own health and flourishing, and the more responsibility they can relinquish, the better.

Sheep to the slaughter, Christians to the lions! They failed Ethics 101 and OT1 levels, so they abandon the game. Feckless eejits and incels: if you don't play, how can you win?

> Nevertheless, I do hold out hope that there are enough learned people in those communities who would be willing to draw boundaries and sound the alarms when necessary. Only time will tell.

I am more worried about the useful idiots who sound false alarms and are used by TPTB to degrade human agency, be they the "Ban AR-15" faction, the "Yellow, Red, White and Rainbow lives don't matter" faction, all the way to the "the end is nigh, AI is taking over, ban AI research" bunch.

All of them miss the tar baby principle, the simple fact that objects and colours lack agency, and most critically "MYOB" & "TANSTAAFL", plus Matthew 7:3.


93 93/93

Thor
 
More on the topic of microphones and circuit design: as a layman without any education in electrical engineering (EE) whatsoever, I would still be inclined to educate myself first on circuit design and weigh multiple opinions from seasoned EEs when developing something.

A smart AI would simply have picked the Schoeps circuit and stated "electret capsule" as a quick get-out that works. It works for the Chinese...

So, for now, the Chinese are smarter than ChatGPT.

Machine-learning/A.I. - if and when good enough - could definitely be a good tool, but maybe best for additional ideas and problem solving.

I think it could be used for bulk research: running simulations of different solutions and presenting the different solutions and results to a human, to do what we do best, make decisions (not always correct or reasonable) with limited information.

ML (as opposed to AI) can provide a condensed and maximally reduced essence of this huge soup of information we humans have generated and continue to generate, which gives the human the best foundation to do the human thing.

A true AI would make the decision itself, and I think at that point I would have to declare the Butlerian Jihad against machine intelligence.

But again, the moral question rears its head, regarding whether or not I owe anything to anyone else. I think the question is particularly pertinent to those working in the medical field.

Medical, engineering, architectural engineering, airplane software writing...

Boeing would probably have used ChatGPT to write the 737 MAX software had it been available, as would Toyota (for that Toyota I was in when it got stuck at full acceleration with no brakes, driving downhill on a winding mountainside road)...

But are we really sure that law practice, advertising and politicking do not fall under this?

The abject failure of the politicos in the free world to deter Putin has by now killed several hundred thousand people in Ukraine (including RuSSian Zoldierz).



Actually, if you ask me, just being human falls under this, as we do owe others something, always.

Someone, somewhere, died so we may live. Many more died so we may live free and standing up.

To deny such a debt is to deny life and freedom.

No man is an island.



If nothing else we owe the molecules in our bodies and the energy to the Universe.

Thor
 
@thor.zmt

To answer briefly, I agree with most of what you said.

Agency, absolutely.

Debate, yes, we are in fact debating. Culture at large is also debating, but what I mean is that there's not much debate anymore at an essential level (i.e., the real theological and metaphysical why of things). Most debate takes too much for granted (e.g., in the West, ethicists like certain Christian values but try to divorce them from a necessary Moral Law Giver; i.e., without an absolute truth, ethics and behavior are arbitrary).

Bulk research and "reducing the soup" is exactly what I mean about machine learning and A.I. (as the term is currently understood) being super useful. Others have argued, though, that it only works if the soup is not spoiled.

As far as duty concerning the medical field, yes: include all other fields there too. I was just trying to make a point but didn't finish it out well. Stream of consciousness. But yes, all of those fields have an effect on everyone, and all of those fields are beholden to duty insofar as people are intrinsically valuable (imago Dei). But that's the root of the debate that is largely ignored.

On that point, I would disagree that we owe anything to "the Universe" if in fact the Universe simply is, without rhyme or reason, disembodied, uninvolved, and impersonal, because then we have the same problem: morality remains arbitrary, as people are free to define "the Universe" and everything in it as they please. But I think we know enough about the Universe to know that it's not accidental, and even people who think that it is still insist on a particular morality and way of living, which is a fool's errand if we're going to be honest.

Therefore, strip away a foundational absolute metaphysic and morality to govern us, and things like A.I. and machine learning also lack a framework in which they "should" operate. At that point, any criticism of an apparent "misuse" of machine learning and A.I. is silly. But I think, based on most people's concerns, no one truly believes that morality and duty and such are purely arbitrary, whether or not they choose to recognize the existence of a transcendent Moral Law Giver.
 
Bulk research and "reducing the soup" is exactly what I mean about machine learning and A.I. (as the term is currently understood) being super useful. Others have argued, though, that it only works if the soup is not spoiled.

For many spoilt items in the soup, AI can serve as a strainer. It can run source checks and simulations, the kind of thing computers are good at. Computers don't beat chess masters by out-intuiting them, but by simply running thousands of branches of the game from the current position to determine the optimum move.
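
As a toy illustration of that branch-running idea, here is a bare-bones minimax sketch. The "game" (players alternately add 1, 2 or 3 to a running total; first to reach 10 wins) is a deliberately tiny placeholder of my own, not chess, and the code is a sketch of the principle rather than anything like a real engine.

```python
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    # Exhaustively expand every branch down to `depth` (the brute-force
    # search described above) and return the best score the side to
    # move can force.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state, maximizing)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves]
    return max(scores) if maximizing else min(scores)

# Placeholder game: players alternately add 1, 2 or 3 to a running total;
# whoever reaches 10 first wins. Chosen only so the full tree stays tiny.
legal_moves = lambda s: [1, 2, 3] if s < 10 else []
apply_move = lambda s, m: s + m
# Terminal score from the first player's viewpoint: once the total reaches
# 10, the player who just moved has won.
evaluate = lambda s, maximizing: 0 if s < 10 else (-1 if maximizing else 1)

best = max(legal_moves(0),
           key=lambda m: minimax(apply_move(0, m), 10, False,
                                 legal_moves, apply_move, evaluate))
print("best opening move:", best)  # prints 2, the known winning move
```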

Therefore, strip away a foundational absolute metaphysic and morality to govern us, and things like A.I. and machine learning also lack a framework in which they "should" operate.

Three laws of robotics, Isaac Asimov.

At that point, any criticism of an apparent "misuse" of machine learning and A.I. is silly. But I think, based on most people's concerns, no one truly believes that morality and duty and such are purely arbitrary, whether or not they choose to recognize the existence of a transcendent Moral Law Giver.

This gets us into a philosophical morass that I'd rather avoid.

I would still suggest that making a sentient anything is something that should be avoided. Look where it got us when the Primum Movens gave eXistenZ to a universe that allowed such a thing.

Thor
 
