In this book, J.J. Sylvia IV examines LiveJournal, a blogging platform site, as a case study of how social media platforms can be weaponized by authoritarian regimes as a way to disrupt knowledge systems and destabilize democracies.
LiveJournal and Russian Disinformation examines the platform’s role in the Kremlin’s broader information warfare strategy in Russia, demonstrating how epistemic sabotage became central to the Kremlin’s efforts to manipulate truth. This transformation matters not only for understanding the challenges within Russia, Sylvia posits, but also for addressing global challenges in navigating truth and democracy in the 21st century.
As fascism is on the rise globally, and social media is one of the tools being wielded by anti-democratic governments to divide citizens in democratic countries across the globe, the reach afforded by the internet allows disinformation efforts to reach a far greater audience now than ever before. This shift in quantity brings with it new risks that must be considered and faced down, and Sylvia argues that by better understanding LiveJournal as an early battle in this new digital arena, we can document the tactics at play and identify just what is at stake.
Abstract: This study advances our understanding of virtual reality (VR) as a tool for increasing both empathy and comprehension of microaggressions, using a unique intersectional dual-perspective approach. Through immersive VR video experiences, participants simultaneously experience and witness microaggressions focusing on racial assumptions about academic abilities. The VR video, developed and scripted by two undergraduate students of color, was recorded from two distinct perspectives: a Black student and an Asian American student, providing viewers an opportunity to witness differential treatment based on racial bias. We conducted a mixed-methods survey of 83 students who viewed the VR experience, focusing on self-reported affective and cognitive reactions. Findings indicate that the VR experience heightened participants’ understanding and empathy toward microaggressions. Notably, one perspective (the Asian American student), led to greater impact, suggesting the necessity for further exploration of VR experiences’ differential impacts and its implications for the intersection of VR and social justice.
Keywords: virtual reality, 360-degree video, microaggressions, racism, intersectionality.
The Data Renaissance is a comprehensive exploration into the pivotal role of data in shaping our contemporary society. This book, a collaborative effort with significant student involvement under expert guidance, delves into the intricate and often opaque world of data practices across various industries. With a focus on the guarded nature of these practices, as exemplified by platforms like TikTok, it offers a rare glimpse into the mechanics behind the algorithms that drive user engagement and business success.
The book is designed to function as a living document, changing and growing with every iteration of the course it is meant to accompany. With the quick advancements in the world of digital data, this dynamic method guarantees that the content is current and relevant. It is a helpful tool for teachers and students alike since it is more than just a list of facts and theories—rather, it is a guide filled with real-world knowledge and experiences.
Central to the book’s narrative is the exploration of how data is wielded and its profound implications across different disciplines. It addresses the challenges inherent in studying data practices, especially when these practices are closely guarded as proprietary secrets by corporations and businesses. The ethical ramifications of these acts are critically examined throughout the book, especially in light of contemporary digital platforms and technologies.
Further, the text takes a deep dive into the practical applications and implications of data in various domains. From the influence of data on consumer behavior and business strategies to its role in shaping public opinion and social dynamics, the book covers a broad spectrum of topics. It goes beyond mere theoretical discussion, offering practical insights and real-life examples that illustrate the pervasive impact of data on our daily lives.
In summary, anyone wishing to comprehend the intricate relationship between data, industry, and society should read this book. It is a useful tool for both individual study and classroom instruction since it provides fundamental insights that can inspire more research and conversation. By presenting a nuanced view of the digital data landscape, the book encourages readers to critically engage with the topic and consider the far-reaching implications of data in our interconnected world.
Posthumanism fosters a more inclusive and less hierarchical approach to our entanglements with both human and non-human elements. Posthuman theory, particularly as articulated by N. Katherine Hayles and Rosi Braidotti, has long been influential in media and cultural studies. Ferrando (2020) argues:
posthuman ethics invites us to follow on three related layers. First of all, as a post-humanism, it marks a shift: from universalism to perspectivism, from multiculturalism to pluralism and diversity. As a postanthropocentrism, it induces a change of strategy: from human agency to agential networks, from technology to eco-technology. As a postdualism, it requires an evolution of our awareness: from individuality to relationality, from theory to praxis. (147)
This Special Issue of the Journal of Posthumanism therefore asks, how does such posthuman perspectivism, pluralism, agentiality, eco-technology, relationality, and praxis, apply to the future of media and cultural studies? How might we understand the very concept of “future”?
“We believe that collectively, the articles in this issue highlight affirmative
approaches to using GAI in teaching and research. They highlight exciting potential
for how to embrace GAI tools within teaching and research, while also noting the
potential challenges that can arise in the process. They also speak to the importance
of continued research and experimentation, approaching their adoption and use in the classroom through a pedagogy of care. As this field—and the world at large—navigates the changes wrought by GAI on our processes of thinking and writing, it will be crucial that we approach these tools with nuance, critique, experimentation, and awareness of contextual factors. This special issue offers a step in that direction, and we hope it inspires ongoing dialogue as we all grapple with the transformative impacts of GAI.”
Posthumanism fosters a more inclusive and less hierarchical approach to our entanglements with both human and non-human elements. Posthuman theory, particularly as articulated by N. Katherine Hayles and Rosi Braidotti, has long been influential in media and cultural studies. Ferrando (2020) argues: posthuman ethics invites us to follow on three related layers. First of all, as a post-humanism, it marks a shift: from universalism to perspectivism, from multiculturalism to pluralism and diversity. As a postanthropocentrism, it induces a change of strategy: from human agency to agential networks, from technology to eco-technology. As a postdualism, it requires an evolution of our awareness: from individuality to relationality, from theory to praxis. (147) This Special Issue of the Journal of Posthumanism therefore asks, how does such posthuman perspectivism, pluralism, agentiality, eco-technology, relationality, and praxis, apply to the future of media and cultural studies? How might we understand the very concept of future?
Introduction to Communication and Media Studies is an in-depth exploration of how communication shapes our world. This book traces the historical evolution of media, from the early days of the printing press to today’s digital age, examining key developments such as the telegraph, radio, television, and the internet. It also covers critical theories that explain media’s impact on society, including the effects of advertising, the role of public relations, and the emergence of social media as a powerful force in modern communication. Chapters on media literacy, critical thinking, and rhetorical analysis help students develop critical skills for understanding and analyzing media messages.
Since its release in late 2022, ChatGPT and subsequent generative artificial intelligence (GAI) tools have raised a wide variety of questions and concerns for the field of technical communication: How will these tools be incorporated into professional settings? How might we appropriately integrate these tools into our research and teaching? In this review, we examine research published in 2023–2024 addressing these questions (N = 28). Overall, we find preliminary evidence that GAI tools can positively impact student writing and assessment; they also have the potential to assist with some aspects of academic and medical research and writing. However, there are concerns about their reliability and the ethical conundrums raised when they are used inappropriately or when their outputs cannot be distinguished from humans. More research is needed for evidence-based teaching and research strategies as well as policies guiding ethical use. We offer suggestions for new research avenues and methods.
This paper scrutinizes the micropolitical fascism latent in social media platforms’ algorithmic designs, which, according to Deleuze & Guattari (2009) and Crano (2022), foster desires for uniformity and control that may escalate into authoritarianism, threatening democracy and free speech. It considers the paradoxical nature of social media in enhancing connectivity while potentially inducing loneliness, an emotional state Arendt links to fascism, and their role in amplifying negative emotions, spreading disinformation, and conspiracy theories, such as QAnon. Delving into the mechanics of such designs, the paper leverages a monist informational ontology to dissect subjectivation processes and envisage overcoming these microfascist inclinations. It suggests a radical redesign of social media platforms that eschews analytics-driven narratives in favor of fostering joyful affect and novel subjectivities. This reimagining aims to detach social media storytelling from analytics and data exploitation, promoting a posthuman model for platform design that resists the generation of microfascist desires.
Social media platforms have received increasingly bad press coverage over the course of the last decade for everything from problematic uses of algorithms to the ability of authoritarian regimes to leverage them as a way to impact elections. Unfortunately, this emphasis on critique, though justified, has led to a paranoid form of thinking in which many understand that such risks exist, but lack a technical grasp of how such platforms function. We argue that platform literacy should be a foundational aspect of a university education, as it is vital to understanding how to best apply one’s agency, especially as part of an engaged citizenry in an increasingly digitized world.
In developing the concept of assemblages, Gilles Deleuze draws at least some inspiration from Gilbert Simondon’s concept of information. While his acknowledgement of Simondon’s influence is almost entirely positive, Deleuze explicitly distances himself from the concept of information in order to avoid its link to the field of cybernetics. However, a Deleuzian informational ontology could instead be leveraged as an alternative to cybernetics. Drawing on the Spinozan link between the work of Deleuze and Simondon, it is possible to develop a hybrid informational ontology. This system can not only offer a different approach to information, data and technology than the essentialist concept of information embraced by cybernetics, but also aligns well with recent research in the biological sciences that has disrupted a long-held concept of individuals as entirely separate and autonomous. Shifting away from the Platonic Form to a Deleuzian/Simondonian in-formation furthers the post-human project aimed at understanding processes of subjectivation at multiple scales, including the micro (biome) and macro (city, population, planet).
In their article, “BreadTube Rising: How Modern Creators Use Cultural Formats to Spread Countercultural Ideology,” J.J. Sylvia IV and Kyle Moody analyze the rise of BreadTube. Scholars have argued that YouTube’s algorithms lead to greater radicalization (Ribeiro et al.) and bad actors have weaponized algorithms to draw users into conspiracies (boyd, What Hath We Wrought?). This article adds to this by linking these practices to the commodification of social media that spread misinformation as adaptations of socially and rhetorically mediated technologies. It analyzes how the economics of YouTube and other platforms demand that user-generated content fit within paradigms of culture and economics. This ideological connection between conspiratorial thinking and economic incentives produced leftist and Marxist counter-narratives. The authors argue that the rise of BreadTube (Kuznetsov and Ismangil; Maddox and Creech) addresses this radicalization by re-deploying the mass-education model using the tenets of capitalism via normalized practices of YouTube algorithms to create pro-socialist and anti-right-wing content.
In connection with emerging scholarship in the digital humanities, media genealogy, and informational ontology, this paper begins the process of articulating a posthuman approach to media studies. Specifically, this project sheds new light on how posthuman ethics, ontology, and epistemology can be applied in order to develop new methodologies for media studies. Each of these approaches builds upon the foundation of an informational ontology, which avoids the necessity for pre-existing subjects that transmit messages to one another within a cybernetic paradigm. Instead, a posthuman paradigm explores methods that include counter-actualization, modulation, and counter-memory. Posthuman media studies emphasizes the need for experimentation in developing new processes of subjectivation and embraces an affirmative posthuman nomadic ethical subjectivity, linking true critique to true creation.
Widespread access to the internet and increasingly powerful computing has facilitated unprecedented change in our world. Perhaps no moment better captures this change than during the spring 2020 COVID-19 pandemic, when governments across the globe asked citizens to stay home and corporations encouraged or mandated that employees work from home, leveraging digital technology to maintain social connections and perform jobs typically done in person. Only a generation ago, this type of quarantine might have been more destabilizing. Much like the printing press, which facilitated a shift toward print culture and expanded access to information and ideas in unprecedented ways, the technologies of the digital frontier are typically understood to be a force of good in the world: democratizing societies through open access, connecting people across continents, and automating once-difficult jobs. But the emerging digital culture, which is constantly and rapidly shifting, also presents challenges. Tools that were initially used to support democratic practices have now been weaponized by autocratic governments. Uncompromising partisanship and nationalism are on the rise. The world is facing wicked problems such as climate change that can only be solved through sustained and collaborative actions across the globe. Understanding these challenges requires us to both connect and cross communities, countries, and campuses.
This paper explores how the concepts of information and technics have been leveraged differently by a variety of philosophical and epistemological frameworks over time. Using the Foucauldian methodology of genealogical historiography, it analyzes how the use of these concepts have impacted the way we understand the world and what we can know about that world. As these concepts are so ingrained in contemporary technologies of the information age, understanding how these concepts have changed over time can help make clearer how they continue to impact our processes of subjectivation. Analysis reveals that the predominant understanding of information and technics today is based on a cybernetic approach that conceptualizes information as a resource. However, this analysis also reveals that Michel Foucault’s conceptualization of technics resonates with that of the Sophists, offering an opportunity to rethink contemporary conceptualizations of information and technics in a way that connects to posthuman philosophic systems that afford new approaches to communication and media studies.
The question of how to teach media literacy in the post-truth era has been widely debated in the field of communication and media studies, with scholars such as danah boyd (2018) arguing that larger questions of epistemology must be included in such instruction. However, such arguments have not adequately addressed the potential for a civic engagement model as a pedagogical approach to this difficult and often politically divisive topic. Specifically, my chapter will address the important role of community dialog in helping students confront and develop a deeper understanding of the underlying epistemological issues related to the post-truth era and related debates surrounding fake news.
boyd (2018) argues that disagreement in the post-truth era is not over what is true, but rather the epistemological questions of how we determine if something is true. Similarly, recent sociotechnical approaches have argued that fact-checking and media literacy pedagogical solutions are not as straightforward as many believe. Marwick (2018) suggests that media literacy training has not actually improved students’ ability to accurately assess information and that fake news sites are becoming better at mimicking the very signals of legitimacy that are often taught as part of media literacy efforts. In light of these critiques, I will assess an evolving approach that I have adopted for addressing these issues in a Communication Law and Ethics course that I teach.
Based on the model for community dialog used by the Society of Philosophers in America, I developed an assignment in which groups of students were tasked with hosting an off-campus community-based discussion related to issues in communication ethics. Drawing on course materials and student work, this chapter analyzes the pedagogical benefit of having students themselves prepare to host a discussion about fake news and post-truth with a group of strangers who potentially hold a wide spectrum of political beliefs. Preparation for this event required that students understand and engage with a wide variety of beliefs. Pedagogically, this forced students to move beyond simply considering their own views, beliefs, and biases and instead consider how post-truth impacts their broader community. An added benefit of this approach is that it emphasizes the importance of civility in this dialog, helping students to not only evaluate their own beliefs, but strive to develop strategies for communicating in productive ways with those with whom they disagree. It actively requires negotiating the potentially blurry boundary of how we determine truth.
I argue that adopting a civic engagement pedagogical approach can not only reduce some of the challenges associated with teaching issues related to post-truth, but forces students to engage with the material in a more thoughtful, meaningful, and contextually relevant way. In conclusion, this chapter, by closely examining the process and results of a civic engagement approach to teaching post-truth in the classroom, sheds new light on the importance of dialog in the context of post-truth.
As COVID-19 spreads across the globe, new technologies are being leveraged to enforce social distancing requirements. I explore social distancing through the theoretical lens of Michel Foucault’s biopolitics, with an emphasis on recognizing unauthorized movement and controlling circulation. Although reporting and widely shared data visualizations about COVID-19 have made many people newly aware that their movements are being tracked and surveilled, governments are already implementing new measures such as geofencing and artificial intelligence (AI)–based facial recognition to facilitate the enforcement of social distancing. The tracking of COVID-19 spread and social distancing behaviors of the public has made more visible the practices of biopolitics but also generated new opportunities for even greater surveillance and control. The current moment offers an opportunity to shift public perceptions about data surveillance, technological control, and the racial disparities of biopower, much in the same way that public perceptions around social media shifted during and after the Arab Spring. How we collectively respond to these biopolitical processes will, in part, determine how such power relations are articulated in the future.
I explore the implementation of major assignments that require students to both learn and use p5.js as a tool for a media studies-related group project. The data that I draw upon include direct observation of students and collection of artifacts and texts such as assignment instructions, rubrics, and examples of student work. The case study involves the implementation of p5.js in three different settings: an upper-level media studies course at a large research-intensive public university, an introductory-level media studies course at a small regional public university, and at a Digital Humanities Workshop open to public registration at a research university.
Increased concern over partisan media divides and arguments about fake news and the nature of truth have led some to call for a new Fairness Doctrine. This doctrine was a Federal Communications Commission (FCC) policy from 1949 to 1987 and required broadcasters to present contrasting views on controversial public issues. The short video above concisely summarizes the main arguments supporting a return of the Fairness Doctrine. However, it does not address some of the challenges associated with crafting such legislation.
This article further develops a methodological approach to media genealogy that extends the methods of media archaeology by adding the concept of processes of subjectivation and experimental and artistic interventions. This begins with an analysis of how the work of scholars such as Foucault, Stiegler, and Kittler aligns with media archaeology practices in terms of discourse networks. Next, I consider how Foucault’s lectures from the Collège de France can be used to extend current media archaeology practices into a genealogical method. After surveying how recent work in several disciplines might match up with such a genealogical approach, the work of Gilles Deleuze, Félix Guattari, and Anne Sauvagnargues is used to develop a genealogical method that emphasizes experimental processes of subjectivation.
The issue of Russian interference in the 2016 U.S. Presidential election has been widely debated by scholars and journalists. However, these works have not fully analyzed the ads that have been released by Facebook and the U.S. Congress. This project uses a case study to analyze the ads posted by the Russian-affiliated Internet Research Agency, considering the quantities of ads targeted to particular geographic locations, the frequency of targeting for unique keywords, and the reach and impressions of each of the ads. Further, these results are compared to results from best practices in traditional social media campaigns as a way to better understand the goals and potential impacts of the IRA ads. In con- clusion, the project, by analyzing the full set of IRA ads, sheds new light on the way false information narratives were leveraged by the Russian-linked IRA.
In this lesson, you’ll read some of the history of the telegraph and how it was shaped by and shaped society and culture. Then you’ll modify and print your own telegraph, before learning how to encode and send messages!
This chapter introduces a code/art approach to data visualization. Though coding has received increasingly greater amounts of attention within the field of Digital Humanities, it has primarily focused on more traditional types of visualizations such as charts and graphs. However, the iteration that is possible through generative design affords more artistic approaches. Casey Reas and Chandler McWilliams1 claim that particular programming languages afford specific opportunities. In much the same way that a carpenter would select particular tools and a particu- lar wood depending on the project, programming languages require sim- ilar careful selection. I use p5.js as the language of choice for this chapter because it is a free, open source language specifically designed for begin- ners, artists, and educators. It is a very accessible language that allows one to quickly begin creating dynamic content on the screen, even with very little previous programming experience.
How can I make the theoretical critique at the heart of the Introduction to Science, Technology, and Society course more tangible to my students?
This was my driving question as I began developing the syllabus for my STS 214 course at North Carolina State University. One of the assignments previously used in the course was an activity based around inventing a new technology. With this assignment, students worked together in groups to compose a simplified patent application. In 2013, NCSU libraries launched a new Makerspace with the opening of the Hunt Librarybranch. I had been brainstorming ways to integrate makerspace tools into my research and teaching; when I connected the availability of these tools with the possibilities of expanding the ‘inventing a technology’ assignment, the spark of excitement was immediate. The assignment was expanded so that students would invent new technologies using critical making tools such as micro-controllers, 3D printing, and augmented reality.
Mark Andrejevic, associate professor at Pomona College, and J. J. Sylvia IV, PhD student in the Communication Rhetoric and Digital Media Program at North Carolina State University, discuss the impact of the neo-materialist turn for media studies and the importance of critiquing surveillance through the theoretical framework of power in addition to that of privacy. Although the decline of symbolic efficiency, brought on at least in part by the rise of big data, seems to disrupt the link that Michel Foucault draws between power and knowledge, Andrejevic considers possibilities for reimagining the knowledge structures associated with big data’s infrastructure.
The ability of both people and organizations to leverage big data in new ways has rendered the traditional ethical frameworks for dealing with issues of privacy and commodification ineffective and archaic. The leveraging of such data raises new questions related to the power generated for businesses through the big data divide—the gap separating those who have access to big data and those who do not. Although the ethical issues related to big data have historical roots in commodification, we have the opportunity to embrace a new ethical framework for this age. Rather than focusing on privacy issues, big data can be better understood through the issue of power discrepancies created by the big data gap. One ethical aspect of this shift is seeking more emancipatory and affirmative uses of big data.
This time I’d like to share some visualizations based on publication years and citations of the scholars working in my area.
I’ve recently been researching and attempting to visualize my field. I’ll share a little background about my program and then the first part of what this visualization process looks like.
My Ph.D. Program–Communication, Rhetoric, and Digital Media–is an interdisciplinary program between the Communication and English departments at North Carolina State University. One of the final courses that we take, before moving on to comprehensive exams and the dissertation process, is aimed at helping us develop a sense of the field in which we’ll be working. This process has taught me that it’s particularly important to do this for those of us who will be working in an interdisciplinary field.
During the recent discussions about new models, methods, and media for the dissertation, I was also taking part in a course on Technologies and Pedagogies in the Communication Arts. During the course, taught by Dr. Deanna Dannels, we were challenged to re-interpret our teaching philosophy through the MaKey MaKey.
The “selfie,” a photograph taken of and by the same person, is a surprisingly malleable genre. Selfies can be taken of one person or of groups, at different angles, in different environments. The photographer-subject can be clothed, intending to showcase their OOTD (“outfit of the day”) or nude, aiming to entice romantic partners. The filters offered by popular platforms like Instagram can make a selfie appear as if it was taken forty years ago. The image geotagging feature of most cell phone cameras even allow users and viewers to use selfies to track the subjects’ daily whereabouts. There are also “selfie” offshoots: “belfies” are of photographer-subjects’ derrieres and the primary subject of “lelfies” are legs. In recent years, the selfie has become something more than a means to capture a look or moment; selfies, in all their forms, have been deployed for a variety of creative and critical purposes.
This forum takes up the hows and whys of selfie creation and circulation, paying special attention to the ways selfies act as a means of asserting agency in a variety of different contexts. Our hope is to combine perspectives on gender, sexuality, and surveillance as well as historical selfie precursors and the use of selfies in the classroom into one concentrated, scholarly forum. In our minds, the benefit of this forum over a scholarly article is that it can showcase the many ways the purposes and functions of selfies clash and create new configurations of creativity and power.
The Quantified Self (QS) has been the topic of much discussion recently in tandem with the development of consumer tracking applications and services. QS is a global network of individuals who voluntarily track various aspects of their bodies and lives, most often with digital and wearable technologies. If an aspect of the self can be counted, it’s probably been tracked by a member of the QS community. The motivation is self knowledge and the means is numerical data.
QS technologies include smartphone lifelogging apps, health and fitness trackers like Fitbit and Apple Healthkit, EEG devices, home biomarker testing kits and quite often, spreadsheets, among other things. Typical QS projects track steps, nutrition, mood and sleep. However, a rare project will surface now and then that interrogates such things like how often a self tracker’s values were exercised on a daily basis, the extent of a person’s material consumption, or even conversations and things heard over a decade, in the form of a searchable database!
In this HASTAC forum, we explore a community at the intersection of posthuman and transhumanist futures, as well as contemporary debates around digital health, surveillance and self governance. Through the forum, we hope to tackle some of the tough questions and challenges facing the quantified self community, including the politics of self-surveillance, the notions of data, identity, and agency inherent in QS practices, and its efforts towards subverting institutionalized knowledge production and reforming institutionalized medicine.
In this chapter we explore the possibility of meeting one’s self through time travel as a metaphor for the digital footprint one leaves through his or her use of social media. Nietzsche suggests that one is always a different person. Whether or not we accept that in a literal, ontological sense, we can all agree that we as adults tend to think about the world differently than we did as children. Now, thanks to sites like Facebook, Twitter, and Blogger, many of the thoughts and feelings of our former selves are easily captured and stored indefinitely on the Internet, allowing us to go back and glimpse versions of our former selves. Do these resources allow us to go back and experience ourselves, not as ourselves but as Other?
For philosopher Gilles Deleuze, the experience of the Other is an expression of a virtual possible world which allows one to see another side to the events that she lives. Learning, for Deleuze, requires a shock, and our encounters with Others can potentially offer just such a shock. In order to truly learn, though, we must not imitate the Other, but instead enter an assemblage with the Other, bringing together two possible ways of expressing the world. Along with Deleuze, we explore the concept of the Other through several existentialist philosophers.
In addition to examples of the Doctor meeting other versions of himself, this type of assemblage and learning is demonstrated in the episode “The Girl Who Waited.” In it, the older Amy, the titular Girl Who Waited, meets her younger self. In her own past she refused to help her younger self be rescued by the Doctor and Rory, but through the creation of an assemblage of the two Amys and the shock of this meeting, she becomes worthy of the event and makes the ethical decision to help rescue the younger Amy.
Though we often think that we each become wiser as we age, a Deleuzian perspective on the Self as Other allows us to ask the question: what might we learn through confronting our younger selves as Other, either through time travel or social media?
Engage your audience visually with stunning Prezi presentation designs and be the envy of your colleagues who use PowerPoint, with this book and ebook.
If you use Prezi in business and want to take your presentations to the next level, or if you want to become the office Prezi master, this book is for you.
Prezi is a tool for delivering presentations in a linear or non-linear format. This cloud-based software enables users to structure presentations on an infinite canvas in a way that is more engaging and visually stimulating to the audience.
This book covers all of the technical elements of the software, whilst also looking at the practicalities of using Prezi in a business environment. It teaches the reader how to think for Prezi, and approach their design in the best way. This is an essential resource for people who want to use Prezi seriously. Apart from covering best practices for inserting images, sound, and video, this book also covers topics for business users such as collaborating and sharing Prezis online, using Prezi at a meeting to brainstorm with overseas colleagues, and how to “Prezify” PowerPoint or Keynote slides. This book will elevate you from Prezi user to Prezi master with ease.
In the fall of 2014, I taught an Introduction to Science, Technology, and Society course (see syllabus) that incorporated gamification as a way to teach critical making as a kinesthetic practice, helping to elucidate the scientific and technological theory in our textbook.
This paper argues that by linking social media assignments to particular levels of Bloom’s Taxonomy, instructors can more easily and straightforwardly assess assignments. Much confusion exists over how to best incorporate these tools, and further, how to properly assess student performance related to social media. Often social media is used simply as an additional and optional channel of communication, rather than as an inherent part of a graded assignment, due in part to the difficulties of assessment. Using social media effectively and collaboratively is an important aspect of literacy in the 21st century; it is therefore important to move beyond merely incorporating the tools to also assessing their use.
Historically, science education has been inquiry-based, focusing on learning through questioning and experimenting. This emphasis on inquiry-based science was seen in the rise of Western philosophy through the Pre-Socratics such as Thales and emphasized in the work of later polymaths such as Gottfried Leibniz and Francis Bacon. Modern American science education, particularly since the passing of No Child Left Behind, has been focused instead on fact-based education. From an educational perspective, the fields of both science and philosophy have faced criticism as the U.S. falls behind in world education, particularly in the STEM (Science, Technology, Engineering, Mathematics) areas.
The authors of this presentation argue that science and philosophy education are both at an important juncture where innovation is needed, and improvements could be made by allowing each to learn from the strengths of the other. The Next Generation Science Standards now being released offer a step in the right direction, featuring eight main ideas based heavily on inquiry. Through a consideration of these standards and a historical understanding of science and philosophy, we suggest a path forward for both scientific and philosophical education.
The rising prominence of positive psychology has increased focus on the scientific study of happiness, ranging from social science to neuroscience and leading to a voluminous literature explaining how to increase happiness. Only recently has this idea of the scientific study of happiness been challenged, most notably through works such as Against Happiness by Eric Wilson and The Antidote: Happiness for People Who Can’t Stand Positive Thinking by Oliver Burkeman. Although differing in thesis, both books sound the common alarm that the Western focus on happiness has become excessive to the point of being detrimental.
I argue that these contradictory positions can be reconciled by introducing philosophic reflection. This presentation will consider how the aim of positive psychology could be improved by broadening the concept of happiness to the good life. Much of the research in positive psychology seems to imply that the achievement of happiness simply is the good life. Considered philosophically, particularly through the lens of Aristotelian virtue ethics, happiness, though important, is only a portion of the good life. The combination of the scientific study of happiness and the philosophic pursuit of the good life can lead to a richer understanding of modern life and the quest for happiness and the good life.
Stephen Hawking’s recently released “The Grand Design” claims on the first page that philosophy is dead because it hasn’t kept up with the sciences. Several strategies for responding to this claim have been taken, ranging from ceding that the branch of philosophy known as metaphysics may be dead to claiming that doing any science at all requires a philosophic framework.
After briefly discussing the historical relationship between philosophy and science, I will argue that the current relationship is somewhat more complex and subtle.
First, in many fields scientists and philosophers are working together closely on contemporary issues such as understanding the concepts of ‘species’ in biology or ‘consciousness’ in neuroscience. I will argue that this sci-phi collaboration is beneficial for both science and philosophy, and has application to several branches of philosophy including ethics.
Second, I argue that science can, in a meaningful way, be equated to philosophy. Many scientists claim that they have no need for philosophy in their day-to-day work. This is likely true for the many scientists who are doing the work that Thomas Kuhn would call “puzzle solving.” However, scientific work that is pushing boundaries or asking questions that might lead to a shift in paradigm resembles the traditional work of metaphysics, in that what changes is not the evidence of experimental results, but rather the interpretation of the very same results. If Hawking concludes that philosophy is dead, it is only because he is firmly entrenched within a particular paradigm.
A/B and multivariate website optimization may not seem ethically problematic at first blush; however, in this chapter I will consider some of the less obvious elements that have been tested, such as header color, button design, and the style of tabs used for linking to product details. A/B and multivariate testing has shown that these seemingly insignificant changes can increase average order value and decrease abandoned shopping carts, among other results. I will consider these tests through the lens of the major ethical systems of utilitarianism, Kant’s respect-for-persons principle, and virtue ethics, using specific case studies and examples of testing results. I conclude that this type of practice is likely ethically problematic in many uses, as understood through all three ethical systems. Along the way I will be careful to demonstrate how the manipulation resulting from A/B and multivariate testing is different from, and more problematic than, that of advertising in general.
The Doctor is a renegade among Time Lords because he involves himself in the affairs of the universe despite this being prohibited by Time Lord policy. Although his personality can differ somewhat from one incarnation to another, the Doctor is essentially a compassionate and caring being – he cares for the particular individuals he meets as well as the universe as a whole. It is precisely this penchant for caring which leads the Doctor to involve himself in the universe – and undoubtedly he makes the universe a better place for it.
The ethics of caring, a relatively new philosophic development, is a system spearheaded by feminists who believe that traditional ethical systems are male-biased and don’t account for the way women experience the world. Of course, most who support a feminist ethics don’t believe the differences this system reflects can be split right down the gender line – it’s simply an easy generalization to make. The Doctor, then, is all the more interesting because he, as a man, represents an ideal example of the ethics of caring.
Mojoworld can be understood as a metaphor for the very real television culture of the past two or three decades. Mojo’s tireless efforts to keep his society from boredom mirror the television industry’s push for fresh and exciting programming each new season. These fresh and exciting ideas have increasingly become reality shows, many of which are quite similar to Mojo’s gladiator-style contests. Sure, the comparison between Mojo and our own television producers is not a perfect metaphor, but the similarities are interesting.
It’s very easy to dismiss Mojo as a villain confined to the pages of a comic book because in the real world, things are always a bit more complex and a bit less certain. Television stations are rarely vilified for seeking to increase their ratings as Mojo is, but like any other television producer, Mojo produces entertainment that people want. In the X-Men series episode “Mojovision,” Mojo makes it clear that the audience doesn’t want to see things like peace, or freedom, or good government. They want blood and guts and love and hate. In the Mojoverse, good ratings are generated by showing the audience what they want to see. Mojo simply produces whatever will satisfy that desire, and his ratings and power increase the more he does so.
If the audience didn’t want to see blood and guts and love and hate, then they probably wouldn’t watch those programs, and Mojo’s ratings would decrease. In other words, Mojo would have absolutely no incentive to produce those shows. Yet both in Mojo’s universe and in our own, people continue to watch these types of shows, so studios continue to produce them. Mojo’s defense of himself is that he, the same as our own television studios, is just innocently serving a public which is already immoral.
Qualitative probability has emerged as an important concept for fields as varied as medicine and artificial intelligence. In technical fields such as these, it is often not possible to assign an accurate numerical degree of belief to a particular proposition, as required by the Bayesian system, yet decisions relying on the epistemic warrant of the statement must be made despite the absence of certainty. The benefits of a qualitative system are considered, with particular emphasis on the Lockean Thesis. The Lockean Thesis is found to be well developed but to lack any method of determining when or how one is epistemically warranted in a particular belief. Several solutions are considered; ultimately, virtue epistemology is briefly explained and suggested as an addition to the Lockean Thesis in order to overcome this difficulty.
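The Lockean Thesis referenced above is commonly given a schematic statement relating outright belief to degrees of belief; the following rendering uses standard notation (the belief operator and threshold symbol are conventions, not drawn from the paper itself):

```latex
% Lockean Thesis (schematic): one is epistemically warranted in
% believing p outright just in case one's degree of belief in p
% meets or exceeds some threshold t above one half.
\[
  \mathrm{Bel}(p) \iff \Pr(p) \geq t, \qquad \tfrac{1}{2} < t \leq 1
\]
```

The difficulty the abstract identifies is precisely that the thesis itself fixes neither the value of \(t\) nor how \(\Pr(p)\) is to be determined when numerical degrees of belief are unavailable.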
The school of logical probability has generally been considered dead; however, J. Franklin believes he has offered an analysis that will resurrect the position. This paper will argue against the assertions Franklin makes for that resurrection. Specifically, two important counter-arguments are presented. First, in response to Franklin’s assertion that some priors have no weight, but that others can be assigned weight based on a statistical syllogism, and that this method is only available to the logical probabilist, it is argued that this formulation rests on the concept of frequency probability and thus does nothing to further the resurrection of logical probability. Second, in response to Franklin’s assertion that background information for determining probability is ubiquitous and based on the conceptual framework, it is argued that this understanding would lead to a system in which anything is either logically possible (1) or not (0), and that this is not useful for the decision-making processes involved in science. Finally, I will suggest a possible direction for logical probability to take if the school is not to remain dead.
The popularity of BtVS is interesting because it goes against so many of the ideas present in traditional media. Is the popularity of the show due at least in some part to the postmodern emphasis that can be seen throughout? While the show may not be able to escape the simulacra, it can at least offer us a separate path, a different line of thinking from that of the rest of the media. Perhaps it has gained such attention from fans and philosophers alike because it spreads a message so different from the ones we are bombarded with by the rest of the media.
Presumably the media began presenting society with the ideas of heroes and true love and the binary opposition of good versus evil because that is what society wanted; that is what made the consumer purchase the media content. Yet the popularity of BtVS seems to suggest that at least a very large portion of American culture is willing to spend that money on ideas that have not been so traditionally popular in the media. Have postmodern ideas trickled down and been integrated into American culture enough that a show so seemingly different is not only successful but wildly popular? The widespread popularity of BtVS would seem to suggest just that. While the popularity of this show may not end the simulacra, it at least creates another framework of ideologies within the simulacra from which the audience may see life – a very postmodern effect in its own right.