Discussion:
Beazley's Problem
Stefan Ram
2024-09-21 12:15:37 UTC
I recently caught this vid "The Problem with The Problem"
featuring David Beazley. In it, he dropped this problem
that went something like:

|Picture this: there's a guy who owns a movie theater in Santa Monica,
|and he's got the freedom to set ticket prices however he wants.
|
|But here's the catch: the higher he sets the price, the fewer
|people show up. So, he recently did a little experiment to
|figure out exactly how ticket prices affect attendance.
|
|Turns out, when he charges $5.00 per ticket, about 120 folks
|come to watch a movie. (=1)
|
|But if he knocks the price down by just 10 cents, an extra 15
|people show up. (=2)
|
|Now, here's where it gets tricky. More people means more
|costs. Running each show sets him back $180, and every person
|who walks through the door adds another four cents to his
|expenses. (=3)
|
|What he's trying to figure out is how all these numbers play
|together so he can set the perfect ticket price that brings in
|the most cash.

David basically said, "The owner wants to find out how to maximize
his profit."

He didn't spell out the exact task, but mentioned he throws
this curveball in some of his courses. And I'm betting my
bottom dollar they're Python programming classes!

So I hit pause on the video and thought to myself: "How would I
tackle coding this bad boy?"

Before you keep reading, why don't you take a crack at it yourself?

Alright, so here's how I approached it: We know that when the
price x is 5 bucks, the number of people n is 120 (^1).

n( 5 )= 120

We know that for every price hike of i times 0.1 dollars, 15*i
fewer people n show up (^2).

n( x + 0.1*i )= n( x )- 15*i

The overhead c for an event is 180 smackers plus 0.04 bucks per
head (^3).

c = 180 + n*0.04

So the profit p for an event with n people, each shelling out
x dollars, comes out to turnover minus costs:

p( x )= n( x )*x - c.

A change in attendance that's exactly proportional to the change
in price means the number of people depends on the price in an
affine-linear way. The relationship "n( x + 0.1*i )= n( x )- 15*i"
pins the slope down to -15/0.1 = -150, and "n( 5 )= 120" then gives
the intercept 120 + 150*5 = 870, so:

n( x )= 870 - 150*x.

The profit "n( x )*x - c" then becomes

p( x )= n( x )*x - c
= -150*x*x +( 870 + 150*0.04 )*x - 180 - 870*0.04.

The derivative of the profit with respect to price x is

p'( x )= -2*150*x +( 870 + 150*0.04 ).

By setting the derivative to zero, you find the maximum at

x =( 870 + 150*0.04 )/( 2*150 )= 87/30 + 0.02 = 2.92

if my math isn't off the rails!

So, my approach to this programming challenge would be to
say that it's best to determine the maximum analytically,
and then the Python program boils down to:

print( "The maximum profit is at" )
print( 87/30 + 0.02 )
print( "viewers." )

Or you could install one of those fancy symbolic math libraries
for Python and let it handle all the manual transformations
I just walked through.
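
For instance, here's a minimal sketch with SymPy (assuming it's
installed), using the model derived above:

from sympy import symbols, diff, solve

x = symbols( "x" )
n = 870 - 150*x  # attendance as a function of price
p = n*x -( 180 + 0.04*n )  # profit: turnover minus costs
best = solve( diff( p, x ), x )[ 0 ]  # set dp/dx = 0 and solve
print( f"The optimal ticket price is {float( best ):.2f} dollars." )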

Now I'm on pins and needles to see where David Beazley's talk
goes from here!
Paul Rubin
2024-09-21 12:45:59 UTC
Post by Stefan Ram
Alright, so here's how I approached it: We know that when the
price x is 5 bucks, the number of people n is 120 (^1).
That assumption doesn't seem so good, but accepting it, your answer
looks right. Here is a pure numerical solution. Since the profit
function is quadratic, the Newton iteration converges immediately.
================================================================
def cost(n): return 180+.04*n # cost to show to n viewers
def revenue(price,n): return price*n # amount collected from them
def people(price): return 120.+(price-5)*(-15./.1) # number who will attend
def profit(price):
    n = people(price)
    return revenue(price,n) - cost(n)

def ddx(f,x,h=0.001): return (f(x+h)-f(x-h))/(2*h) # numerical derivative
def newton(f,x0): return x0 - f(x0)/ddx(f,x0) # Newton-Raphson iteration

def dprofit(price): return ddx(profit, price) # derivative of profit

x = 5.
for i in range(3):
    print(f'{i} {x:.4f} {profit(x):.1f} {dprofit(x):.1f}')
    x = newton(dprofit,x)
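
Run as-is, this should print something like

0 5.0000 415.2 -624.0
1 2.9200 1064.2 0.0
2 2.9200 1064.2 0.0

which agrees with the analytic answer of 2.92 dollars.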
Stefan Ram
2024-09-21 14:18:07 UTC
Post by Paul Rubin
def ddx(f,x,h=0.001): return (f(x+h)-f(x-h))/(2*h) # numerical derivative
def newton(f,x0): return x0 - f(x0)/ddx(f,x0) # Newton-Raphson iteration
It's hella rad to see you bust out those "next-level math tricks"
with just a single line each!
Paul Rubin
2024-09-21 20:19:50 UTC
Post by Stefan Ram
It's hella rad to see you bust out those "next-level math tricks"
with just a single line each!
You might like:

https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf

The numerics stuff starts on page 9.
Annada Behera
2024-09-23 07:44:00 UTC
The "next-level math trick" Newton-Raphson has nothing to do with
functional programming. I have written solvers in purely iterative
style. As far as I know, Newton-Raphson is the opposite of functional
programming as you iteratively solve for the root. Functional programming
is stateless where you are not allowed to store any state (current best
guess root).

Stefan Ram
2024-09-23 12:26:38 UTC
Post by Annada Behera
The "next-level math trick" Newton-Raphson has nothing to do with
functional programming.
Nobody up the thread was claiming it was functional. And you can
totally implement anything in an imperative or functional style.

from typing import Callable

def newton_raphson(
        f: Callable[[float], float],
        f_prime: Callable[[float], float],
        x0: float,
        epsilon: float = 1e-7,
        max_iterations: int = 100
) -> float:
    def recurse(x: float, iteration: int) -> float:
        if iteration > max_iterations:
            raise ValueError("Maximum iterations reached. The method may not converge.")

        fx: float = f(x)
        if abs(fx) < epsilon:
            return x

        x_next: float = x - fx / f_prime(x)
        return recurse(x_next, iteration + 1)

    return recurse(x0, 0)

# Example application: find a root of f(x) = x^2 - 4

# Define the function and its derivative
def f(x: float) -> float:
    return x**2 - 4

def f_prime(x: float) -> float:
    return 2*x

# Initial guess
x0: float = 1.0

try:
    root: float = newton_raphson(f, f_prime, x0)
    print(f"The root is approximately: {root}")
    print(f"f({root}) = {f(root)}")
except ValueError as e:
    print(f"Error: {e}")
Lawrence D'Oliveiro
2024-09-23 22:44:11 UTC
And you can totally implement anything in an imperative or functional
style.
Except for I/O, which does not fit a pure-functional programming style.
Even “pure-functional” languages have to sacrifice the “pure” part when it
comes to I/O.
Paul Rubin
2024-09-24 00:22:27 UTC
Post by Stefan Ram
Nobody up the thread was claiming it was functional. And you can
totally implement anything in an imperative or functional style.
Yeah the confusion was because I posted a link to "Why FP Matters",
which discusses these sorts of numerical hacks.
Post by Stefan Ram
return 2*x
You might enjoy implementing that with automatic differentiation (not to
be confused with symbolic differentiation) instead.

http://blog.sigfpe.com/2005/07/automatic-differentiation.html
Annada Behera
2024-09-24 08:25:57 UTC
Post by Paul Rubin
You might enjoy implementing that with automatic differentiation (not
to be confused with symbolic differentiation) instead.
http://blog.sigfpe.com/2005/07/automatic-differentiation.html
Before I knew about automatic differentiation, I thought neural
network backpropagation was magic. Coding up backward-mode autodiff
is a little trickier than forward-mode autodiff, though.

(a) Forward-mode autodiff takes less space (just a dual component
for every input variable) but needs more time to compute. For a
function f: R -> R^m, forward mode can compute the derivatives in
O(1) time, but it needs O(m) time for f: R^m -> R.

(b) Reverse-mode autodiff requires you to build a computation graph,
which takes space, but is faster. For a function f: R^m -> R, it can
run in O(1) time, and vice versa (O(m) time for f: R -> R^m).

Almost all neural network training these days uses reverse-mode
autodiff.
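
Here is a minimal forward-mode sketch using dual numbers, reusing the
f(x) = x^2 - 4 example from upthread (illustrative only, with just
enough operator overloading for this one function):

class Dual:
    # a + b*eps with eps*eps = 0; b carries the derivative along
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __mul__(self, other):
        if not isinstance(other, Dual): other = Dual(other)
        return Dual(self.a*other.a, self.a*other.b + self.b*other.a)
    def __sub__(self, other):
        if not isinstance(other, Dual): other = Dual(other)
        return Dual(self.a - other.a, self.b - other.b)

def f(x):
    return x*x - 4

x = Dual(3.0, 1.0)   # seed the derivative: dx/dx = 1
y = f(x)
print(y.a, y.b)      # 5.0 6.0, i.e. f(3) = 5 and f'(3) = 6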
david k. combs
2024-11-10 20:48:31 UTC
Off topic, but quick. Are you the paul rubin who I knew back in 70's in nyc, on E. 84th st?

If so, respond to me at ***@optonline.net. Now live in New Rochelle, but winter in
Sarasota, Fla. (like NOW). Phone: 941-954-2029. (Yeah 941 looks like mistyped 914 for
westchester, but 941 IS for sarasota...) Thanks David Combs (now old fart at 82!)
Paul Rubin
2024-11-10 21:55:01 UTC
Post by david k. combs
Off topic, but quick. Are you the paul rubin who I knew back in 70's
in nyc, on E. 84th st?
Hey yeah, I'll email you! Way cool. You are still a young feller,
trust me. I spend a lot of time taking care of my mom who is way older
than you ;). I'm in the middle of something but will email today
or can call. Yes I remember New Rochelle :). Good to hear from you!
Stefan Ram
2024-09-26 12:51:13 UTC
Post by Stefan Ram
totally implement anything in an imperative or functional style.
In functional programming, you don't redefine names. So,

|let i := 7

is still kosher with functional programming, while

|let i := 7
|let i := 8

is a no-go. Why am I bringing this up?

If you redefine a name in a Python module, like,

|i = 7
. . .
|i = 8

, you're putting the kibosh on a certain optimization for name lookup
that CPython has used since around 2022, and your program's going to
drag. This means that sprinkling in a little functional programming
mojo can make your Python programs zip along!

This was laid out by Kevin Modzelewski in a talk back in 2022.

He dropped these nuggets for Python programs (for CPython, I take it)
that don't cramp modern optimizations:

- Don't reassign global variables.

- All objects of a class should have the same attributes
(names, not values; i.e., "obj.dict.keys()" shouldn't
be different between objects of the same class).

- Set the same attributes in the same order for all objects
of a class.

- Use slots.

- Don't change attributes of classes of objects.

- Don't bother trying to optimize attribute lookup for
method calls outside of loops anymore.
(Don't try to "cache" methods in variables.)
The optimizer will take care of this today better
than you could ever do.

What else puts the brakes on a program is using module "__getattr__"
methods and tracing or profiling.
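
Here's a minimal sketch of the "use slots, set the same attributes
in the same order" advice (the class is just illustrative):

class Point:
    __slots__ = ( "x", "y" )  # fixed attribute set, no per-instance __dict__

    def __init__( self, x, y ):
        # every instance gets the same attributes, in the same order
        self.x = x
        self.y = y

p = Point( 1.0, 2.0 )
p.x = 3.0    # fine: rebinding an existing slot
# p.z = 4.0  # AttributeError: 'Point' object has no attribute 'z'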
Stefan Ram
2024-09-26 12:54:34 UTC
Post by Stefan Ram
(names, not values; i.e., "obj.dict.keys()" shouldn't
obj.__dict__.keys()
Gilmeh Serda
2024-09-26 16:13:55 UTC
Post by Stefan Ram
- Use slots.
...or you end up with sloths? ;)
--
Gilmeh

Why, every one as they like; as the good woman said when she kissed her
cow. -- Rabelais
Antoon Pardon
2024-10-06 20:19:10 UTC
Post by Annada Behera
The "next-level math trick" Newton-Raphson has nothing to do with
functional programming. I have written solvers in purely iterative
style.
What is your point? Any problem solved in a functional style can
also be solved in a purely iterative style. So you having written
something in an iterative style doesn't contradict Newton-Raphson
being expressible in a functional style.
Post by Annada Behera
As far as I know, Newton-Raphson is the opposite of functional
programming as you iteratively solve for the root. Functional programming
is stateless where you are not allowed to store any state (current best
guess root).
That doesn't prevent you from passing state along as a parameter,
usually in some helper function.
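
For instance, a minimal sketch (the names are illustrative):

def newton_step(f, df, x):
    return x - f(x)/df(x)

def find_root(f, df, x, steps=20):
    # the current best guess is threaded through as a parameter,
    # never stored in a mutable variable
    return x if steps == 0 else find_root(f, df, newton_step(f, df, x), steps - 1)

print(find_root(lambda x: x*x - 4, lambda x: 2*x, 1.0))  # -> 2.0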
--
Antoon Pardon.