My method entails theorizing three cultural traits of the research pseudo-subculture (a specific product's consumer base) that reside together in a paradoxical complex, according to what that subculture preserves, seeks, and builds. This structure is based on my theory of human nature, which follows the same pattern (i.e., humans are composed of internally paradoxical traits that guide our internal logic, our personality). A consumer base does not constitute an actual subculture, as most products do not garner cult followings. However, due to product competition, both at the marketing level and at the consumer-decision level, an imagined subculture can be inferred.
Once theories are proposed for each product alternative, these traits can be entered into a prompt to generate a simulation with an AI application (ChatGPT, for example). These simulations serve three purposes: 1. brainstorming, 2. looking for blind spots, and 3. testing the theory. I then analyze the simulations myself, and this human interpretation is used to distill findings and recommendations as I see them.
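To make the workflow concrete, here is a minimal sketch of how the three theorized traits might be assembled into a simulation prompt. The helper name, trait wording, and scenario are illustrative assumptions, not my actual prompts; the output would be sent to an AI application such as ChatGPT.

```python
def build_simulation_prompt(preserves: str, seeks: str, builds: str,
                            scenario: str) -> str:
    """Combine the three paradoxical cultural traits into one simulation prompt.

    The traits correspond to what the pseudo-subculture preserves, seeks,
    and builds; the scenario is the business situation to simulate.
    """
    return (
        "Simulate a group of consumers whose shared culture holds three "
        "traits in a paradoxical complex:\n"
        f"1. They preserve: {preserves}\n"
        f"2. They seek: {seeks}\n"
        f"3. They build: {builds}\n"
        f"Scenario: {scenario}\n"
        "Show how these traits interact, including the tensions among them."
    )

# Hypothetical example traits for a product's consumer base:
prompt = build_simulation_prompt(
    preserves="a sense of rugged self-reliance",
    seeks="effortless convenience",
    builds="online communities around the product",
    scenario="the brand raises prices and changes its signature style",
)
print(prompt)
```

The point of templating the prompt is that the theory stays human-made: the analyst supplies the three traits, and the AI only simulates their interaction.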
FAQ #1: Whose perspective is the cultural theory from: the brand's or the consumer's?
Both. The idea is that the brand-consumer connection creates an ecosystem, a kind of tacit social contract, whereby both the brand and the consumer get something of value provided the expectation (the style, the experience, etc.) is generally met. If the contract were explicit, cultural analysis would not matter for business strategy, except when consumers cannot put their underlying expectations into words.
FAQ #2: Does the bias within AI hinder social simulations?
As long as AI is not used to create the cultural theory (see FAQ #5 below), bias within AI is not a problem. In fact, if controlled for, it actually helps the interpretation. This is because what I am looking for is the social pattern that I input, not the adjacent patterns that the AI application suggests. Bias within the AI can only hinder if it goes undetected. Since generative AI is really just pattern recognition and generation based on mathematical models, undetected bias is unlikely: trends that are ubiquitous on the internet are easy to recognize and control for.
FAQ #3: Should the AI simulations be relied upon?
No. These are tools to be used for human consumption. Using AI to simulate social life should work to dispel the myth of actual “intelligence” within AI. ChatGPT and the like are simply tools. The more you use AI for social simulations, the more skeptical you should become.
FAQ #4: Why don’t you use cultural analysis on brand failures?
Crises generally do not present good conditions for studying culture: competitors and investors smell blood in the water, their financial interests complicate messaging, and influencers in the general public pile on to get attention. In contrast to AI bias, which is generally easy to control for, human bias is much more difficult to manage. However, I will give my two cents on crises when I think it's relevant and not mean-spirited.
FAQ #5: Why not use AI to generate market culture theories?
This is where the bias within AI training data really would be a problem. Asking AI applications to generate cultural theories fails because the application would simply reproduce what is prevalent in popular culture. A cultural theory should not be based on what is openly discussed in popular culture on the internet, because social traits are generally obscured intentionally within a culture. For example, it is commonly known that Americans are individualistic, and this is openly discussed. However, the breakdown of what that individualism actually means for Americans in real life is generally not discussed, because it is uncomfortable for most people to look reflexively at their own limitations. This is true for every culture. The problem compounds when subcultures (or pseudo-subcultures) are analyzed, which is the purpose of this blog, simply by virtue of the paucity of relevant data. Remember, there is a world in which Big Data equals Big Noise. Therefore, it is imperative for the analyst to hypothesize a theory without the use of AI, after which AI can be used to simulate that human-made theory.
