Compare commits

...

10 Commits

Author SHA1 Message Date
af45d1a9c8 Remove sentence comparing quantity of outliers. 2025-04-24 01:17:55 -07:00
1570c4559f Simpler tense. 2025-04-24 01:09:26 -07:00
5e56263469 Editing. Better flow? 2025-04-24 01:09:09 -07:00
4b46a735cf Added dependent clause detailing base of logarithm 2025-04-24 00:30:21 -07:00
05dc8b41e1 Updated for 2 sigfigs on numbers. 2025-04-23 23:22:49 -07:00
02067139e6 Fix typo. Edit sentences. 2025-04-23 18:06:25 -07:00
259e19f2ac Added explanation 2025-04-23 08:24:01 -07:00
800712c63c First draft of README. Let's see how ti looks. 2025-04-23 07:28:15 -07:00
721a38435b Repurposed horizontal line to show the y-intercept value. 2025-04-23 07:16:48 -07:00
bdcba003fe Preparing notebook for submission.
Added business understanding Q&A.
Labeled outputs by color (regression attempt).
Some code clean up.
2025-04-23 07:00:38 -07:00
4 changed files with 222 additions and 73 deletions

101
README.md Normal file
View File

@@ -0,0 +1,101 @@
<!--Your Github repository must have the following contents:
A README.md file that communicates the libraries used, the motivation for the project, the files in the repository with a small description of each, a summary of the results of the analysis, and necessary acknowledgments.
Your code in a Jupyter notebook, with appropriate comments, analysis, and documentation.
You may also provide any other necessary documentation you find necessary.-->
# stacksurvey
**stacksurvey** is an exploration and analysis of data from Stack Overflow's 2024 Developer Survey.
[https://survey.stackoverflow.co/2024/](https://survey.stackoverflow.co/2024/)
The motivation for this project is to satisfy a class assignment. Eventually, an interesting (enough) topic was discovered in the data set:
>What is the annual compensation (y) over years of experience (x) of developers who use a given programming language in a specific country?
## Requirements
numpy, pandas, scikit-learn, matplotlib, seaborn
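Assuming a standard pip setup, the requirements can be installed with:

```shell
# install the notebook's dependencies
# (the PyPI package for sklearn is named scikit-learn)
pip install numpy pandas scikit-learn matplotlib seaborn
```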
## Summary of Analysis
The models generated by the notebook become less reliable beyond 10 years of experience or annual incomes above $200,000.
Each chart comes with two regression lines. Red is the default regression line with no tuning applied. The other line attempts to better fit the data by transforming or shifting x.
The transformation is typically
y = m * log_a(x) + b
where the base a is a parameter; each model applied a different base to the log function.
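As a rough illustration of the technique (the notebook defines a `log_base_a` helper for this; the data below is synthetic, made up purely for the sketch):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def log_base_a(x, a=1.07):
    # change of base: log_a(x) = ln(x) / ln(a)
    return np.log(x) / np.log(a)

# synthetic salaries that follow y = m * log_a(x) + b exactly,
# with m = 12000, b = 55000, and base a = 1.2
years = np.linspace(1, 30, 50).reshape(-1, 1)
salary = 12000 * log_base_a(years, a=1.2) + 55000

# fitting a plain LinearRegression on the transformed x recovers m and b
model = LinearRegression()
model.fit(log_base_a(years, a=1.2), salary)
print(model.coef_[0][0], model.intercept_[0])  # ≈ 12000.0  55000.0
```

The point of the transform is that a straight line in `log_a(x)` becomes a curve that rises quickly early in a career and flattens later, which is the salary shape the charts suggest.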
### C
![graph of c programmers](images/programmers-C-United-States-of-America.png)
+----------------------+
red regression line for C
coefficient = 1427.58
intercept = 103659.82
rmse = 26971.44
r2 score = 0.06
sample predictions:
[[125073.46117519]
[107942.54574181]
[109370.12202793]]
+----------------------+
+----------------------+
magenta regression line for C
coefficient = 11973.47
intercept = 54776.27
rmse = 21198.61
r2 score = 0.57
sample predictions:
[[132396.26294684]
[119937.35465744]
[ 64985.1549115 ]]
+----------------------+
For C programmers, a linear model fits moderately well, with an r2 score of 0.57. Junior-level positions earn roughly $54,776 per year, and income rises by about $11,973 for each unit increase of the log-transformed years of experience.
### Python
![graph of python programmers](images/programmers-Python-United-States-of-America.png)
+----------------------+
red regression line for Python
coefficient = 2573.62
intercept = 123479.15
rmse = 39759.45
r2 score = 0.34
sample predictions:
[[126052.77118246]
[174951.60602361]
[187819.7204555 ]]
+----------------------+
+----------------------+
cyan regression line for Python
coefficient = 10378.53
intercept = 82957.69
rmse = 42374.26
r2 score = 0.38
sample predictions:
[[139882.01866593]
[117229.55243376]
[137277.30441955]]
+----------------------+
For data scientists, analysts, or engineers, a linear model is a moderate fit at best, with r2 scores of 0.34-0.38. There appears to be a divergence at the 10-year mark in their careers. This may be the result of their field (advertising, finance, bio/medical, and so on).
Entry- or junior-level professionals generally have an income of $82,957 or $123,479. Their annual income increases by roughly $10,378 or $2,573 each year, respectively.
## Acknowledgements
* "Udacity AI" (ChatGPT), for the idea of transforming x values to turn a linear regression into a logarithmic regression.

Binary file not shown (new image, 81 KiB).

Binary file not shown (new image, 91 KiB).

View File

@@ -91,44 +91,46 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"\n", "\n",
"from sklearn.linear_model import LinearRegression, LogisticRegression\n", "from sklearn.linear_model import LinearRegression\n",
"from sklearn.model_selection import train_test_split\n", "from sklearn.model_selection import train_test_split\n",
"from sklearn.metrics import root_mean_squared_error\n", "from sklearn.metrics import root_mean_squared_error, r2_score\n",
"from sklearn.model_selection import StratifiedShuffleSplit\n",
"import traceback\n", "import traceback\n",
"import numpy as np\n", "import numpy as np\n",
"\n", "\n",
"# still haven't come up with a name\n", "# still haven't come up with a name\n",
"class Foo:\n", "class Foo:\n",
" def __init__(self, dataset, language, jobs=None, n_rich_outliers=0, n_poor_outliers=0, country=\"United States of America\"):\n", " def __init__(self, dataset, language, jobs=None, \n",
" n_rich_outliers=0, n_poor_outliers=0, \n",
" country=\"United States of America\"):\n",
" self.devs = None\n", " self.devs = None\n",
" self.canvas = None\n", " self.canvas = None\n",
" self.language = language\n", " self.language = language\n",
" self.country = country\n", " self.country = country\n",
" # focus on people who have given ...\n", " # focus on people who have given ...\n",
" key = \"ConvertedCompYearly\"\n", " key_x = \"YearsCodePro\"\n",
" key2 = \"YearsCodePro\"\n", " key_y = \"ConvertedCompYearly\"\n",
" df = dataset.dropna(subset=[key, key2])\n", " df = dataset.dropna(subset=[key_x, key_y])\n",
" self.key = key\n", " self.key_x = key_x\n",
" self.key2 = key2\n", " self.key_y = key_y\n",
"\n", " \n",
" criteria = {\"MainBranch\":\"I am a developer by profession\"}\n", " qualifiers = {\n",
"\n", " \"MainBranch\":\"I am a developer by profession\",\n",
" #print(df[\"Country\"].unique)\n", " }\n",
" if country:\n", " if country:\n",
" criteria[\"Country\"] = country\n", " qualifiers[\"Country\"] = country\n",
" for k in criteria:\n", " for k in qualifiers:\n",
" df = df[df[k] == criteria[k] ] \n", " df = df[df[k] == qualifiers[k] ] \n",
"\n", "\n",
" # chatgpt tells me about filtering with multiple strings\n", " # chatgpt tells me about filtering with multiple strings\n",
" if jobs:\n", " if jobs:\n",
" df = df[df.isin(jobs).any(axis=1)]\n", " df = df[df.isin(jobs).any(axis=1)]\n",
"\n", "\n",
" devs = None\n", " devs = None\n",
" if len(language) > 1:\n", " if len(language) == 1 or language in [\"Python\", \"Java\"]:\n",
" devs = get_lang_devs(df, language)\n",
" else:\n",
" devs = get_c_devs(df, lang=language)\n", " devs = get_c_devs(df, lang=language)\n",
" else:\n",
" devs = get_lang_devs(df, language)\n",
" \n",
" replacement_dict = {\n", " replacement_dict = {\n",
" 'Less than 1 year': '0.5',\n", " 'Less than 1 year': '0.5',\n",
" 'More than 50 years': '51',\n", " 'More than 50 years': '51',\n",
@@ -136,77 +138,72 @@
"\n", "\n",
" # https://stackoverflow.com/questions/47443134/update-column-in-pandas-dataframe-without-warning\n", " # https://stackoverflow.com/questions/47443134/update-column-in-pandas-dataframe-without-warning\n",
" pd.options.mode.chained_assignment = None # default='warn'\n", " pd.options.mode.chained_assignment = None # default='warn'\n",
" new_column = devs[key2].replace(replacement_dict)\n", " new_column = devs[key_x].replace(replacement_dict)\n",
" devs[key2] = pd.to_numeric(new_column, errors='coerce')\n", " devs[key_x] = pd.to_numeric(new_column, errors='coerce')\n",
" pd.options.mode.chained_assignment = 'warn' # default='warn'\n", " pd.options.mode.chained_assignment = 'warn' # default='warn'\n",
" # print( devs[key2].unique() )\n", " # print( devs[key_x].unique() )\n",
" \n", " \n",
" indices = devs[key].nlargest(n_rich_outliers).index\n", " indices = devs[key_y].nlargest(n_rich_outliers).index\n",
" devs = devs.drop(indices)\n", " devs = devs.drop(indices)\n",
" indices = devs[key].nsmallest(n_poor_outliers).index\n", " indices = devs[key_y].nsmallest(n_poor_outliers).index\n",
" self.devs = devs.drop(indices)\n", " self.devs = devs.drop(indices)\n",
" del devs, new_column, criteria\n", " del devs, new_column\n",
" \n", " \n",
" def visualize(self, n_lowest=0, hue=\"Country\"): \n", " def visualize(self, hue=\"Country\", \n",
" palette=sb.color_palette() ): \n",
" self.canvas = plt.figure()\n", " self.canvas = plt.figure()\n",
" key = self.key\n", " key_x = self.key_x\n",
" key2 = self.key2\n", " key_y = self.key_y\n",
"\n", "\n",
" if n_lowest > 0:\n", " sb.scatterplot(data=self.devs, x=key_x, y=key_y, hue=hue, palette=palette)\n",
" # chatgpt draws my line\n",
" # Calculate the lowest nth point (for example, the 5th lowest value)\n",
" # iloc[-1] gets the last element from the n smallest\n",
" lowest_nth = self.devs[key].nsmallest(n_lowest).iloc[-1] \n",
" # Draw a horizontal line at the lowest nth point\n",
" # label=f'Lowest {n_poorest}th Point: {lowest_nth_value:.2f}'\n",
" plt.axhline(y=lowest_nth, color='purple', linestyle='--', label=\"y=%0.2f\" % lowest_nth )\n",
"\n",
" sb.scatterplot(data=self.devs, x=key2, y=key, hue=hue)\n",
" plt.legend(loc='lower center', bbox_to_anchor=(1.5,0)) \n", " plt.legend(loc='lower center', bbox_to_anchor=(1.5,0)) \n",
" title = \"Annual Salary of %s Developers Over Years of Experience\" % self.language\\\n", " title = \"Annual Salary of %s Developers Over Years of Experience\" % self.language\\\n",
" + \"\\nsample size=%i\" % len (self.devs)\\\n", " + \"\\nsample size=%i\" % len (self.devs)\\\n",
" + \"\\ncountry=%s\" % self.country\n", " + \"\\ncountry=%s\" % self.country\n",
" plt.title(title)\n", " plt.title(title)\n",
"\n", "\n",
" def run_regression(self, split=train_test_split, \n", " def run_regression(self, model=LinearRegression(), split=train_test_split, \n",
" x_transform=None, change_base=None, x_shift=0,\n", " x_transform=None, change_base=None, x_shift=0, y_shift=0,\n",
" line_color='red'):\n", " line_color='red', random=333):\n",
" df = self.devs # .sort_values(by = self.key2)\n", " df = self.devs # .sort_values(by = self.key2)\n",
"# df['binned'] = pd.qcut(df[self.key], q=4, labels=False)\n", " X = df[self.key_x].to_frame()\n",
" X = df[self.key2].to_frame() + x_shift\n",
" if x_transform is not None and change_base is not None:\n", " if x_transform is not None and change_base is not None:\n",
" X = x_transform (X, a=change_base ) \n", " X = x_transform (X, a=change_base ) \n",
" elif x_transform is not None:\n", " elif x_transform is not None:\n",
" X = x_transform (X) \n", " X = x_transform (X) \n",
"\n", " X = X + x_shift\n",
" y = df[self.key].to_frame()\n", " y = df[self.key_y].to_frame() + y_shift\n",
"# y = df['binned']\n",
" \n", " \n",
" X_train, X_test, y_train, y_test = split(X, y, test_size=0.2, random_state=999)\n", " X_train, X_test, y_train, y_test = split(X, y, test_size=0.2, random_state=random)\n",
"\n", "\n",
" model = LinearRegression()\n",
" model.fit(X_train, y_train)\n", " model.fit(X_train, y_train)\n",
" y_pred = model.predict(X_test)\n", " y_pred = model.predict(X_test)\n",
"\n", "\n",
" print(\"+----------------------+\")\n", " print(\"+----------------------+\")\n",
" print(\"%s regression line for %s\" % (line_color, self.language))\n",
" print(\"coefficient =\", model.coef_)\n", " print(\"coefficient =\", model.coef_)\n",
" print('intercept=', model.intercept_)\n", " print('intercept=', model.intercept_)\n",
" rmse = root_mean_squared_error(y_test, y_pred)\n", " rmse = root_mean_squared_error(y_test, y_pred)\n",
" print(\"rmse = \", rmse)\n", " print(\"rmse = \", rmse)\n",
" r2 = r2_score(y_test, y_pred)\n",
" print(\"r2 score = \", r2)\n",
" print(\"sample predictions:\")\n", " print(\"sample predictions:\")\n",
" print(y_pred[3:6])\n", " print(y_pred[3:6])\n",
" print(\"+----------------------+\")\n", " print(\"+----------------------+\")\n",
" \n", " b = model.intercept_[0]\n",
"\n",
" plt.figure(self.canvas)\n", " plt.figure(self.canvas)\n",
" plt.xlim(left=0, right=40) # Adjust these values as needed\n",
" plt.plot(X_test, y_pred, color=line_color, label='Regression Line')\n", " plt.plot(X_test, y_pred, color=line_color, label='Regression Line')\n",
" plt.axhline(y=b, color=\"purple\", linestyle='--', \n",
" label=\"b=%0.2f\" % b, zorder=-1 )\n",
" plt.legend(loc='lower center', bbox_to_anchor=(1.5,0)) \n", " plt.legend(loc='lower center', bbox_to_anchor=(1.5,0)) \n",
" del y_pred, model\n", " del y_pred, model\n",
"\n", "\n",
"\n", "\n",
" def export_image(self, filename = \"images/programmers-%s-%s.png\"):\n", " def export_image(self, base_filename = \"images/programmers-%s-%s.png\"):\n",
" plt.figure(self.canvas)\n", " plt.figure(self.canvas)\n",
" plt.savefig(filename % (self.language, self.country), bbox_inches='tight')\n", " filename = base_filename % (self.language, self.country)\n",
" plt.savefig(filename.replace(' ', '-'), bbox_inches='tight')\n",
"\n", "\n",
"# the higher a is, the steeper the line gets\n", "# the higher a is, the steeper the line gets\n",
"def log_base_a(x, a=1.07):\n", "def log_base_a(x, a=1.07):\n",
@@ -220,7 +217,6 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"\n",
"\n", "\n",
"# expected python jobs\n", "# expected python jobs\n",
"pyjobs = [\"Data scientist or machine learning specialist\",\n", "pyjobs = [\"Data scientist or machine learning specialist\",\n",
@@ -230,15 +226,36 @@
"# \"Developer, QA or test\"\n", "# \"Developer, QA or test\"\n",
"]\n", "]\n",
"\n", "\n",
"python = Foo(so_df, \"Python\", jobs=pyjobs, n_rich_outliers=9, n_poor_outliers=2)\n", "python = Foo(so_df, \"Python\", jobs=pyjobs, n_rich_outliers=12, n_poor_outliers=2)\n",
"python.visualize(hue=\"DevType\")\n", "python.visualize(hue=\"DevType\", palette=[\"#dbdb32\", \"#34bf65\", \"#ac70e0\"])\n",
"# earnings vary widely after the first year\n", "python.run_regression()\n",
"python.run_regression( x_transform=log_base_a, x_shift=1)\n", "python.run_regression( x_transform=log_base_a, change_base=1.20, \n",
"python.run_regression( x_transform=log_base_a, change_base=1.2, x_shift=1, line_color='magenta')\n", " x_shift=0, y_shift=-1.5e4, line_color='cyan', random=888)\n",
"python.run_regression( x_transform=log_base_a, change_base=1.12, x_shift=1, line_color='lightgreen')\n",
"python.export_image()" "python.export_image()"
] ]
}, },
{
"cell_type": "markdown",
"id": "f4e3516e-ffe3-4768-ae92-e5cb0be503f8",
"metadata": {},
"source": [
"## Business Understanding\n",
"\n",
"* For Python programmers specializing in data, e.g. data scientists, engineers, or analysts, a linear model moderately fits the relationship between income and years of experience. For data engineers and scientists, there is a possible divergence within the career path at 10 years of experience. \n",
"\n",
"* The typical starting salary within the field of data science is around $100,000 to $120,000.\n",
"\n",
"* The income of a data professional can increase by either $2,000 per year (red) or $10,000 per year (cyan).\n",
"\n",
"* For both models, the r2 score ranges from poor to moderate (0.20 to 0.37) depending on the random seed. The variability not explained by the model could be the result of the field, such as advertising, finance, or bio/medical technology.\n",
"\n",
"* For any given point in a career, the model is off by $39,000 or $42,000 (the rmse).\n",
"\n",
"Generally, for low incomes poorly explained by the model, the cause could be getting a new job after a year of unemployment, internships, or part-time positions. For high incomes poorly explained by the model, the cause could be professionals at large companies who had recently added a programming language to their skill set. Other causes could be company size or working hours.\n",
"\n",
"(Business understanding was done for C first. Questions are there.)"
]
},
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
@@ -247,18 +264,57 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"# expected C jobs\n", "# expected C jobs\n",
"cjobs = [\"Developer, embedded applications or devices\", \n", "cjobs = [\n",
" \"Developer, game or graphics\",\n", " \"Developer, embedded applications or devices\", \n",
" \"Hardware Engineer\" ,\n", " \"Developer, game or graphics\",\n",
" \"Hardware Engineer\" ,\n",
" # \"Project manager\", \n", " # \"Project manager\", \n",
" # \"Product manager\"\n", " # \"Product manager\"\n",
"]\n", "]\n",
"c = Foo(so_df, \"C\", jobs=cjobs, n_rich_outliers=11)\n", "c = Foo(so_df, \"C\", jobs=cjobs, n_rich_outliers=30, n_poor_outliers=2)\n",
"c.visualize(n_lowest=3, hue=\"DevType\")\n", "c.visualize(hue=\"DevType\", palette=[\"#57e6da\",\"#d9e352\",\"#cc622d\"] ) \n",
"c.run_regression(x_transform=log_base_a, change_base=1.25)\n", "c.run_regression()\n",
"c.run_regression(x_transform=log_base_a, change_base=1.3, \n",
" x_shift=2, y_shift=-5000, line_color=\"magenta\", random=555)\n",
"c.export_image()" "c.export_image()"
] ]
}, },
{
"cell_type": "markdown",
"id": "89d86a1e-dc65-48e4-adcf-bb10188fd0b7",
"metadata": {},
"source": [
"## Business Understanding\n",
"\n",
"1. For C programmers, specifically embedded systems, graphics, and hardware engineers, a linear model fits the relationship between income and years of experience.\n",
"\n",
"2. A coefficient of 11973.469 indicates that, for each unit increase of the log-transformed years of experience, a C programmer typically earns an additional $12,000 of annual income.\n",
"\n",
"3. Because the scatter fans out like a spray of water, salaries for C programming professionals vary strongly after 10 years of experience.\n",
"\n",
"4. A junior C programmer, at 2 years of experience, typically earns $54,776.266 per year.\n",
"\n",
"5. An r2 score of 0.571 indicates that a bit over half of the variability in the data is explained by the independent variable. This, however, holds only for incomes below $200,000. Some participants with 5 years of professional experience reported incomes at or around $200,000; these were considered unusual outliers. Among the game developers, they may have independently released a game.\n",
"\n",
"An rmse of 21198.612 indicates the model is off by around $21,000 at any given point in a career.\n",
"\n",
"### Questions that can be answered\n",
"\n",
"* Is there a linear relationship between income and years of experience?\n",
"* Is there a point in a career where raises stop occurring?\n",
"* What is the typical salary of an entry-level or junior C programmer?\n",
"* How much more do C programmers earn for each year of experience?\n",
"* How much of the variability is explained by the model and what factors are not considered?\n"
]
},
{
"cell_type": "markdown",
"id": "928f421c-1f2b-4be1-9ce2-8f3593a9a823",
"metadata": {},
"source": [
"The cells below generate extra or unused graphs."
]
},
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
@@ -584,14 +640,6 @@
"# 55-64 years old\n", "# 55-64 years old\n",
"# 65 years or older" "# 65 years or older"
] ]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b954a811-e401-48dc-9ba4-263a5f2cf5c5",
"metadata": {},
"outputs": [],
"source": []
} }
], ],
"metadata": { "metadata": {