If you have something to share that shouldn't be its own post, add it here!
(You can also create a Shortform post.)
If you're new to the EA Forum, you can use this thread to introduce yourself! You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
(You can also put this info into your Forum bio.)
If you're new to effective altruism, consider checking out the Motivation Series (a collection of classic articles on EA). You can also learn more about how the Forum works on the site's About page.
I just want to second the encouragement for people to consider making shortform posts - as well as just full posts (see also) - and to make Forum bios.
I've found a lot of shortform posts quite interesting.
And I like that Forum bios let me get a rough sense of people's backgrounds, their interests, or what they're up to. It seems like that helps a little with making this feel like a community, with making it easier to connect with people who have relevant backgrounds/interests/projects, and with reducing how often I'm left wondering who the hell all these usernames from the void with fascinating thoughts are!
Doesn't using behavioural studies (based on reinforcement learning) avoid this concern? I suppose reinforced behaviour might still be unconscious, but it seems less likely, especially for this task (it's not pure reaction; the goal isn't to answer faster, it's just to give the right answer), assuming the animal is conscious at all. Well, even reinforced behaviours in humans may be unconscious/refl
There’s been a lot of discussion and disagreement over whether EA has a talent gap or a money gap. Some people have been saying there’s not that large of a funding gap anymore and that people should be using their talent directly instead. On the other hand, others have been saying that there definitely still is a funding gap.
I think both parties are right, and the reason for the misunderstanding is that we have been referring to the entire EA movement instead of breaking it down by cause area. In this blog post I do so and demonstrate why we're talking past each other.
Thanks for this post, it was very insightful. Do you have any ideas on the talent/funding gap scenario for other EA cause areas like global priorities research (I believe this doesn't come under meta EA), biosecurity, nuclear security, improving institutional decision making, etc?
We are in triage every second of every day by Holly Elmore
… by Toby Ord
Fear and Loathing at Effective Altruism Global 2017 by Scott Alexander
And I'm sure you can find plenty by Peter Singer, including full books. Here are a few short reads:
The Drowning Child and the Expanding Circle
Famine, Affluence, and Morality
http://www.philosophyexperiments.com/singer/ (a questionnaire based on Singer's drowning child thought experiment)
http://schwitzsplinters.blogspot.com/2023/06/contest-winner-philosophic
Another reading list here by the Center for Reducing Suffering.
http://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/
http://longtermrisk.org/altruists-should-prioritize-artificial-intelligence/
This post was written for Convergence Analysis. It introduces a collection of “crucial questions for longtermists”: important questions about the best strategies for improving the long-term future. This collection is intended to serve as an aide to thought and communication, a kind of research agenda, and a kind of structured reading list.
The last decade saw substantial growth in the amount of attention, talent, and funding flowing towards existential risk reduction and longtermism. There are many different strategies, risks, organisations, etc. to which these resources could flo
Thanks, that's all really interesting.
I think I largely agree, except that I think I'm on the fence about the last paragraph.
Regarding existential risk estimates, I do see value in doing research on specific questions that would make us adjust those estimates, and then adjusting them accordingly.
I've long had the impression that moral language may be on a similar footing to talk of personal identity, in that both cannot be made fully precise, and the identity/non-identity (rightness/wrongness) in certain thought experiments is simply under-determined. In this blog post, I look at how our use of moral language attains reference and then examine what implications this has for our more speculative uses of moral language -- e.g. in population ethics.
Besides the topics touched on in the post, another point of interest to EA may be its implications on animal welfare. When we t
How concerned should we be about replaceability? One reason some people don't seem that concerned is that the leaders of EA organizations gave very high estimates for the value of their new hires. About twenty-five organizations answered the following question:
For a typical recent Senior/Junior hire, how much financial compensation would you need to receive today, to make you indifferent about that person having to stop working for you or anyone for the next 3 years?
The same survey showed that organizations reported feeling more talent constrained than funding constrained.
On a scale o...
I believe this only applies to certain causes, mainly global poverty. If you want to work on existential risk, movement building, or cause prioritization, basically no organizations are working on these except for EA or EA-adjacent orgs. Many non-EA orgs do cause prioritization, but they generally have a much more limited range of what causes they're willing to consider. Animal advocacy is more of a middle ground, I believe EAs make up somewhere between 10% and 50% of all factory farming focused animal advocates.
(This is just my impression, not backed up by any data.)
I'm starting a master's in machine learning at a research university that's within the top 10 for CS grad programs. I've had some informal conversations with grad students on AI Risk (which I don't know very much about), and people are pretty skeptical. Intuitively, I'm inclined to agree with them.
The general view espoused is: AI is just a bunch of matrix multiplication. How can something that lacks agency and consciousness take over the world?
I started thinking about what experimental results would make me more alarmed.
Suppose somebody trained GPT-3 on a bunch of pyth...
I anticipate some pushback on considering this an EA question, or on even adopting an analytical mindset at all here, but I think it's a useful question.
I'm assuming here that the average person generally has a positive externality. Answers can be simplified; for example, by just considering their economic returns.
Related question: Do you know a tool where one can enter risk factors and obtain probabilities for different birth defects?
It’s time for another round of feature announcements!
We’re rolling out our new content editor as an experimental feature, available for anyone who wants to use it.
Better image support
You can now upload images into a post, or just copy-and-paste them directly. No more mandatory URLs!
Table support
You can now create and edit tables directly in the editor:
You can even merge cells to create fun shapes:
More text editing options
This is your new editing menu:
New options include custom code blocks for a
I think we allow markdown tables using this syntax, but I really haven't debugged it very much and it could totally be broken: http://www.markdownguide.org/extended-syntax/#tables
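For anyone who wants to try it, the pipe syntax described on that page looks roughly like this (the cell contents here are just made up for illustration):

```
| Feature | Status       |
| ------- | ------------ |
| Images  | Supported    |
| Tables  | Experimental |
```

Whether the Forum's editor actually renders this is exactly the part that's untested.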
A friend of mine made a very convincing case for the widespread use of high-quality masks. I am reposting it here because this community has greater knowledge of effecting change through public health campaigns etc.:
"Surgical masks do not form an airtight seal to the face and thus can't reliably prevent transmission. Achieving a tight fit with FFP masks is tricky and at least requires some practice. A recent post on LessWrong suggested the use of reusable masks with replaceable filters and body made out of silicone..."
"These masks (also sometimes called respirators) are... (Read more)
I'm sorry if I'm being ignorant because I haven't followed C-19 very closely recently, but can you point out what you take issue with?
I think population ethics and infinite ethics should be separated. They are different topics, although relevant to each other.
I wanted to share this article about an independent research effort led by my IGDORE colleague Michelle King-Okoye, who is aiming to improve healthcare outcomes for COVID patients from black, Asian, and minority ethnic (BAME) populations in developed countries. While I wouldn't say that the project ranks highly against other EA-aligned healthcare work, the qualitative assessment of BAME COVID patient treatment outcomes does at least seem neglected compared to the rest of the COVID response in developed countries.
I'm also quite impressed this has gotten so far as a grass-roots res...
Clare Donaldson, Joel McGuire, Michael Plant[1]
To determine how to do good as cost-effectively as possible, it is necessary to estimate the value of bringing about different outcomes. We briefly outline the recent methods GiveWell has used to do this. We then introduce an alternative method – Well-Being Adjusted Life-Years, or ‘WELLBYs’ – and use it to estimate the values of two key inputs in GiveWell’s analysis: doubling consumption for one person for one year and averting the death of a child under 5 years old. On the WELLBY approach, outcomes are assessed in terms of their impact
I like this! I would recommend polishing it into a top-level post.
Short post conveying a single but fundamental and perhaps controversial idea that I would like to see discussed more. I don't think the idea is novel, but it gets new traction from the progress in unsupervised language learning that culminated in the current excitement about GPT-3. It is also not particularly fleshed out, and I would be interested in the current opinion of people more involved in AI alignment.
I see GPT-3 and the work leading up to it as a strong indication that 'paperclip maximizer' scenarios of AI misalignment are not particularly difficult to avoid.
By 'paperclip maximizer' scenarios I refer to scenarios in which a powerful AI system is set to pursue a goal, pursues that goal without a good model of human psychology, intent, and ethics, and produces disastrous unintended consequences.
Thanks for stating your assumptions clearly! Maybe I am confused here, but this seems like a very different definition of "paperclip maximizer" than the ones I have seen other people use. I am under the impression that the main problem with alignment is not a lack of ability of an agent to model hu...
[[THIRD EDIT: Thanks so much for all of the questions and comments! There are still a few more I'd like to respond to, so I may circle back to them a bit later, but, due to time constraints, I'm otherwise finished up for now. Any further comments or replies to anything I've written are also still appreciated!]]
Hi!
I'm Ben Garfinkel, a researcher at the Future of Humanity Institute. I've worked on a mixture of topics in AI governance and in the somewhat nebulous area FHI calls "macrostrategy", including: arguments for prioritizing work on AI, plausible near-term security issues a
The key difference is that I don't think the orthogonality thesis, instrumental convergence, or progress eventually being fast are wrong - you just need extra assumptions in addition to them to get to the expectation that AI will cause a catastrophe.
Quick belated follow-up: I just wanted to clarify that I also don't think that the orthogonality thesis or instrumental convergence thesis are incorrect, as they're traditionally formulated. I just think they're not nearly sufficient to establish a high level of risk, even though, historically, many present...
Seems to work surprisingly well!